Wednesday, December 27, 2017

Making of Santa’s Search

I am posting this early today as I am going to The Last Jedi tonight, so I will be busy. I have decided that for next year I will not be interrupting my emulator project for game material unless it is for a game jam that I am participating in. Material relating to resurrecting Blazing Games games and the revised version of my Flash Game Development book will be on my resurrected Blazing Games development blog, which I will be starting this Saturday. Now onto Santa’s Search.

When I originally created Santa’s Search, Flash did not have any 3D support at all. Worse yet, it didn’t have direct bitmap support, so manually building an image was not possible. My Coffee Quest series was already using ray casting. The CQ ray caster had only partial perspective correction; after taking graphics programming courses I realized that the solution I found was simply wrong and that the better approach would be to do a world transform so every ray acts as if it were the center ray. But that is way off topic. The point I was trying to make is that at the time I couldn’t do ray casting, so I went with a more primitive approach known as the painter’s algorithm.

The idea here is that you paint the things furthest from the player first, then overpaint them with the things that are closer to the player. The downside to this approach is that you are potentially overdrawing the same pixel multiple times. As this is not a real-time game, this is not a huge concern, so the painter’s algorithm works fine.
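To make the idea concrete, here is a minimal sketch of the painter’s algorithm in JavaScript. This is illustrative rather than the game’s actual code; the blocks array, its row/col fields, and the drawSprite function are all made-up names for the example.

// Painter's algorithm sketch: sort the visible blocks so the farthest
// ones come first, then draw in order so closer blocks overpaint them.
// blocks, row/col, and drawSprite are hypothetical names.
function paintScene(blocks, player) {
    blocks.sort(function(a, b) {
        var distA = Math.abs(a.row - player.row) + Math.abs(a.col - player.col);
        var distB = Math.abs(b.row - player.row) + Math.abs(b.col - player.col);
        return distB - distA; // descending distance: farthest first
    });
    for (var i = 0; i < blocks.length; ++i)
        drawSprite(blocks[i]); // nearer blocks drawn last, covering farther ones
}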


As you can see by the sprite atlas above, the game has three different blocks that it uses to draw the maze: a left-side block, a center block, and a right-side block. These are chosen based on the position of the block relative to the player’s viewpoint. Had I been a bit more ambitious with saving image atlas space, I could have combined all three blocks into a single distorted block and had the sprite take just the appropriate chunk of the combined image. The deer and sled images are the same no matter which position the player views them from.

The original game was controlled using the keyboard. As mentioned in previous posts, I want the game to be playable on touch devices. As the only controls are turning and moving, adding simple arrow buttons was the obvious choice. I was originally going to go with transparent arrows that turned solid when clicked on, but felt that the outlined arrows simply looked better so went with those.

The hard part of porting this game was assembling the sprite sheet, as Flash CS6 did an extremely poor job of it, so I had to adjust the sheet manually and then figure out the coordinates myself. The code was very quick to port and, to my shock, worked nearly flawlessly once all the “this” statements were added. Half a day’s worth of work and I have a playable game. Not too bad for porting. Let us hope that my future Flash to HTML5 ports go as smoothly as this one did.

See you next year!

Wednesday, December 20, 2017

Making Evil Gnome’s Revenge

Before I get to my game overview, a brief update on my 2600 project. The assembler is functional now and, while I do want to add more directives to it, I will start implementing the 6502 emulation, adding new directives to the assembler as needed. I have a number of articles on my 2600 assembler ready and will be posting them next year. Now back to the evil gnome.

Evil Gnome’s Revenge was my 2012 game jam entry for Ludum Dare #25, which had the theme “You are the Villain.” At the time I really didn’t want to go down any of the dark paths that I had thought of, and having created both Santa’s Search and Santa’s Snowball years earlier, I thought this would be a light-hearted way of making the player the villain without having them do anything outrageous. I can’t say that the herding concept was original, as there are a number of games with a similar mechanic, Munch’s Oddysee probably being the biggest influence.

The development time on this port was pretty abysmal, as I made the cardinal mistake of working on it while watching TV. While I know that the human brain is not designed for multitasking, the lackluster performance I had while coding this clearly emphasizes how poor we humans are at it. The strange thing is that I normally have podcasts or other background noise playing, so I had expected my normal development speed. When coding, however, I tend to tune out what I am listening to, while the TV took most of my attention as I got caught up with my shows before losing the contents of my PVR. The big lesson is that you shouldn’t let your recorded backlog build up that much. That coding is best done without distractions could also be considered a lesson, if I didn’t already know it.

The first surprise was that the game was created using the Starling library. This is a library built on Molehill, the 3D API that later versions of the Flash player added. When I saw the Starling imports I thought that perhaps I should consider StageGL, the WebGL-based renderer that is in testing now but will eventually replace the standard EaselJS stage. The map for the game, which is procedurally generated, is 64x64, with each tile being a movie clip that goes to the appropriate frame to display the tile at that location. Once set, however, the tiles do not change. Having 4096 sprites on the screen is probably not very good for performance, but by creating a 2048x2048 pixel cache there is only one background image for the renderer to track, giving the game 60 fps performance on decent machines. It is easy to cache sprites, as the following code shows.

// for speed reasons, cache the tilemap
this.tileMap.cache(0,0, 64*32, 64*32);

One of the substantial changes that I am making to all the games that I am porting is to make sure they work with touch interfaces. Touch input meant rewriting how the player moves, as the original game used the cursor keys or WASD for movement. The obvious solution was to simply have the player click where they want to move and head in that direction. The obstacles in the map, however, made this a bit tricky. A path-finding algorithm could have been used, but I didn’t feel like writing one, so I opted for a simple slide-along-the-obstacle system. This checks whether the player is stuck and, if so, tries moving on just one of the two axes. While this method still lets the player get stuck, most of the time the player will slide around the obstacle.

if (this.canEnterTile(targX, targY, mover) == false) {
    if (this.canEnterTile(targX, mover.worldY, mover) == false) {
        if (this.canEnterTile(mover.worldX, targY, mover) == false) {
            // we are stuck!
        } else {
            // horizontal movement blocked, slide vertically
            mover.worldY = targY;
        }
    } else {
        // vertical movement blocked, slide horizontally
        mover.worldX = targX;
    }
} else {
    // target tile is clear, so move normally
    mover.worldX = targX;
    mover.worldY = targY;
}

This is where things got strange. The game seemed to work fine with the change until the mouse was moved, at which point the game stalled for a few seconds. Out of curiosity, I removed all the tiles from the tilemap, replacing it with a single 2048x2048 shape. This solved the delay problem, meaning that the problem was with how Create.js handles clicking: every tile was being checked with a pixel-accurate test to see if it was affected by the mouse event. The solution was to not have the game screen process the mouse event but to grab the mouse events at the higher stage level and then manually determine which tile was clicked on. When you are dealing with thousands of objects, different techniques must be used.

// convert the stage coordinates of the click into tile coordinates
// (tiles are 32 pixels square) and make that the gnome's target
var clickX = e.stageX;
var clickY = e.stageY;
this.gnome.targetX = (clickX - this.tileMap.x) / 32;
this.gnome.targetY = (clickY - this.tileMap.y) / 32;

With the major issues out of the way, this was not a difficult game to port. That only leaves the final game in the series, which will be covered next week, and then we can get back to my 2600 emulator.

Wednesday, December 13, 2017

Making Santa’s Snowball part 3 – tweens and gameplay

I was able to finish and post the second game in this year’s Santa trilogy and got the final game in the trilogy running. Tonight I will be cleaning up that game ready for release next Friday. With my Christmas obligations finished I will be able to get back to the emulator, though next week will be a making-of post for Evil Gnome’s Revenge and the Wednesday after that will be a post about the making of Santa’s Search.

One of the important things I wanted to do while creating this project was work with tween.js, as tweening is very important for animation and can be useful for all sorts of other things. The term tween comes from animation as shorthand for in-between. The lead animator would draw the key frames of an animation, then other animators would draw all the frames in between the two keyframes. Computers now take care of the in-between frames, which is why tools like Animate are still popular even though the Flash player is fading away.

Tween.js is not tied strictly to animation but can be used on any object to change the value of a variable over time. The associative-array nature of JavaScript objects is what allows for this, though similar functionality can be had in C by using a pointer to the variable being changed. For Santa’s Snowball, tweening is only used for animation, but one thing that I could have done, had I thought of it at the time, was to set up a tween so the score change happens at the moment of snowball impact instead of when the snowball is thrown.
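For instance, something along these lines (hypothetical, not in the shipped game) would tween the score variable itself, waiting out the snowball’s 900 millisecond flight before applying the points:

// Hypothetical: delay the score change until the snowball lands.
// The points variable is assumed to hold the value of the hit.
cjs.Tween.get(this).wait(900).to({score: this.score + points}, 1);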

Tweening is an important part of the game, as I use it for handling the turning of the billboards as well as for the snowballs. My billboard works by having a blank billboard shape (a white rectangle), a Santa image, and an evil gnome image. This allows for three states of the billboard: empty, Santa, or evil gnome. The show board function lets you indicate the type of board to show and hides Santa and/or the evil gnome based on the provided board type. The billboard, Santa, and the evil gnome are all then tweened from a shrunken width to their proper size. Hiding simply shrinks all three of them, which takes advantage of the fact that the unused images are already hidden. Here is the code for that.

/* shows the billboard appearing animation using the indicated type */
this.showBillboard = function(type) {
    this.santa.visible = type == 1;
    this.gnome.visible = type == 2;
    cjs.Tween.get(this.shape).to({scaleX:.1}, 0).to({scaleX:1}, 500);
    cjs.Tween.get(this.santa).to({scaleX:.1}, 0).to({scaleX:1}, 500);
    cjs.Tween.get(this.gnome).to({scaleX:.1}, 0).to({scaleX:1}, 500);
}

/* shows the billboard disappearing animation */
this.hideBillboard = function() {
    cjs.Tween.get(this.shape).to({scaleX:1}, 0).to({scaleX:.01}, 500);
    cjs.Tween.get(this.santa).to({scaleX:1}, 0).to({scaleX:.01}, 500);
    cjs.Tween.get(this.gnome).to({scaleX:1}, 0).to({scaleX:.01}, 500);
}

Testing this code did turn up a strange issue. When I had more than one billboard, all of them would show the same image. This was due to the fact that I am overriding Easel.js’s Container class but forgot to call the initialize method. The poor way that classes are handled is not one of the better features of JavaScript, though at least future versions improve on this. That being said, WebAssembly is probably the direction that I will be going in the future.

While tween.js does support motion paths, it requires that a plugin be installed. By plugin, I mean within the Create.js code, not within the browser. This would not have been difficult to do, but since I would also have to come up with the curves to use, I decided I would have a bit of fun and write a simple path generator for snowballs. I break the snowball path into three parts, with the line to the target being used as the base path. The points one-third and two-thirds of the way along the line are raised by a third of the line’s length to form a rudimentary curve. Each third of the line takes a slightly different amount of time to traverse, so the snowball moves faster nearer the player and slower further away. In addition to the path, the size of the snowball is also being tweened. The code for handling the throwing of a snowball is below.

// create a path the hard way as didn't want to bother with motion paths
var endx = this.billboards[indx].x;
var endy = this.billboards[indx].y;
var dx = endx - 320;
var dy = endy - 440;
var linelen = Math.sqrt(dx * dx + dy * dy);
var thirdlen = linelen / 3;
var mid1x = 320 + dx / 3;
var mid1y = 440 + dy / 3 - thirdlen;
var mid2x = 320 + dx / 3 * 2;
var mid2y = 440 + dy / 3 * 2 - thirdlen;
this.snowball.visible = true;
cjs.Tween.get(this.snowball)
    .to({x:320, y:440, scaleX:1, scaleY:1}, 0)
    .to({x:mid1x, y:mid1y, scaleX:.25, scaleY:.25}, 200)
    .to({x:mid2x, y:mid2y, scaleX:.1, scaleY:.1}, 300)
    .to({x:endx, y:endy, scaleX:.02, scaleY:.02}, 400)
    .call(this.playSplatSound)
    .to({visible:false}, 1);
// animation ends with a splat so set up the 8 splat animations as well.
for (var cntrSplat = 0; cntrSplat < 8; ++cntrSplat) {
    cjs.Tween.get(this.splats[cntrSplat])
        .wait(899)
        .to({x:endx, y:endy, scaleX:.02, scaleY:.02, visible:true}, 1)
        .to({x:endx+this.splatOffsetX[cntrSplat], y:endy+this.splatOffsetY[cntrSplat], scaleX:.005, scaleY:.005}, 300)
        .to({visible:false}, 1);
}
this.doneBillboardTime = Date.now() + 900;

I suppose I could have used a similar technique to generate the curve, but this approach works well. I am liking the Create.js library, finding it not much more difficult to use than Animate, which makes it a good choice for developing games in JavaScript. As WebAssembly starts to take off, however, it is likely that my future will be either Kotlin or C++ compiled to WebAssembly. Still, I have many Flash games to port, with most of them being easier to port using Create.js than to another language, but there will be a few that may be worth the effort to rewrite.

Wednesday, December 6, 2017

Making Santa’s Snowball part 2 - Multiple Scenes

Not much progress was made, as I am upgrading my PVR, so the weekend and last few evenings were spent getting caught up with my TV shows before I lose everything on the PVR when the swap is done. I suppose this is a mixed blessing, as I have been overdoing things lately, so having a forced break was probably good, but it means that the port of Evil Gnome’s Revenge may be delayed. I was hoping to post the game this weekend, and still may be able to, but working while watching TV is not very productive. Still, either this weekend or next will see the second game in my Santa trilogy, where you play the bad guy and need to chase Santa’s reindeer out of their corral.

Last week we looked at the preloading of the assets. With all the assets ready to be used, we still have the problem of dealing with multiple scenes. Animate CC 2017 does not handle multiple scenes at all, so my solution when creating multi-scene projects in Animate was to simply put each scene in its own MovieClip and then have the main timeline hold keyframes with each of the different scene movies in them. When you think about it, if you have total control over the stage then having a MovieClip that contains all the scenes is not necessary, as you can simply have a single scene MovieClip on the stage at a time. This has the additional benefit that you don’t have to worry about removing event listeners, as a MovieClip that is not on the stage will not be getting events. The following is my switching code:

spelchan.SantaSnowball.switchScene = function(newScene, e) {
    var stage = spelchan.SantaSnowball.stage;
    stage.removeChildAt(0);
    switch(newScene) {
        case spelchan.SantaSnowball.TITLE_SCENE:
            stage.addChild(spelchan.SantaSnowball.titleScene);
            break;
        case spelchan.SantaSnowball.PLAY_GAME_SCENE:
            stage.addChild(spelchan.SantaSnowball.playScene);
            break;
        case spelchan.SantaSnowball.ABOUT_SCENE:
            stage.addChild(spelchan.SantaSnowball.aboutScene);
            break;
        case spelchan.SantaSnowball.INSTRUCTIONS_SCENE:
            stage.addChild(spelchan.SantaSnowball.instructionsScene);
            break;
        case spelchan.SantaSnowball.WIN_SCENE:
            stage.addChild(spelchan.SantaSnowball.winScene);
            break;
        case spelchan.SantaSnowball.LOSE_SCENE:
            stage.addChild(spelchan.SantaSnowball.loseScene);
            break;
        case spelchan.SantaSnowball.BIG_LOSE_SCENE:
            stage.addChild(spelchan.SantaSnowball.bigLoseScene);
            break;
    }
}

In hindsight, this is not the best way of doing this, and I have come up with two better approaches. The first is to have an array of scenes and use the numerical constant as an index into this array: verify that the index is valid, then remove the old scene and add the new one to the stage. Much easier code. I use numerical constants for efficiency, but considering that this function is not called often and the constants are fairly large chunks of text that get transmitted, it may be more practical just to use string names for the various scenes. Since JavaScript objects are associative arrays, you could simply tie the scenes to actual variables within the class for your game. That is the approach I will be taking for the second game in this trilogy.
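A rough sketch of the array approach, assuming the scene constants are the values 0 through 6 in display order:

// Sketch of the array-based switcher: scenes indexed by the same
// numerical constants, so the whole switch statement disappears.
spelchan.SantaSnowball.scenes = [
    spelchan.SantaSnowball.titleScene,
    spelchan.SantaSnowball.playScene,
    spelchan.SantaSnowball.aboutScene,
    spelchan.SantaSnowball.instructionsScene,
    spelchan.SantaSnowball.winScene,
    spelchan.SantaSnowball.loseScene,
    spelchan.SantaSnowball.bigLoseScene
];

spelchan.SantaSnowball.switchScene = function(newScene) {
    var scenes = spelchan.SantaSnowball.scenes;
    if (newScene < 0 || newScene >= scenes.length) return; // invalid index
    var stage = spelchan.SantaSnowball.stage;
    stage.removeChildAt(0);
    stage.addChild(scenes[newScene]);
}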

As can be discerned from the code, there are seven scenes that make up this game. Five of these scenes just consist of images and text with a button leading back to the title screen. The title screen has a slight bit of animation, plays a musical clip, and has three buttons for navigating to other scenes. The title screen is actually a good demonstration of one of the problems with my scene-switching system: the movie clips do not restart when they are switched to. This could be forced by having the switcher call a gotoAndPlay method, but that would require all the scenes to have a gotoAndPlay method, which is not that big of an issue but something that needs to be considered.


An example of when you may have a scene that is not a movie clip is my Billboard class. This is created as a container that holds three objects. The class does do some animation, but instead of making this animation frame-based, it is controlled by separate function calls which take advantage of tweens to perform the animation. This allows the billboards to remain open for different amounts of time based on the level of the game. I even take advantage of this flexibility by having the act of throwing a snowball adjust the closing timing of the billboard.

More on that next week where we finish off the game and take a look at the tween.js class in more detail.

Wednesday, November 29, 2017

Making Santa’s Snowball part 1 - Sprite sheets and Preloading

My making of Santa’s Snowball was originally going to be just a single article, but as I was covering so much of the various Create.js libraries the article grew much longer than I had anticipated. This means that it will be broken into multiple parts, which will be immediately followed by the making of game 2 in my Santa trilogy (to be announced next week), and that will probably be followed by the making of game 3, which means that I will not be able to get back to my creation of an assembler until late December, though I will post progress reports if significant progress is made.

Santa’s Snowball was an interesting project, as I wanted to have a Christmas game for this year and noticed that there were three games that formed a trilogy. I decided that I would port all three games to get into the Christmas spirit as well as to practice my Create.js skills, as I am working on a book on that subject, so the more familiar I am the better. More about the books in February.

I had performed a quick port of the game to Create.js using Animate, only to discover that the size of the assets when converted to JavaScript was huge. The only asset that I bothered keeping was the snowball. For everything else, I created an image atlas that held everything. An image atlas is simply a large image that contains other images, including sprite sheets, with the idea that you only load the single image. When dealing with 3D APIs such as OpenGL this is extremely desirable, as switching textures is very costly, so having everything on a single texture can greatly boost speed. The key thing to remember when creating an image atlas is that you need to leave space between images, otherwise you may end up with fringes. I did not remember this, so there are some minor fringing issues with Santa’s Snowball, but nothing overly concerning.

The sprite sheet support in Create.js is great with the ability to specify complex animation sequences. The class even has support for spreading your images across multiple image atlases which could be an effective way of organizing complex character animation but not something I had to worry about.

Having just a single image that multiple sprite sheets would access meant that I could preload the image. As I have a bunch of sounds to load as well, the preload.js library would be useful. Sound.js is the Create.js sound library and, while sounds do not need to be preloaded, if they have not been preloaded they will not play the first time they are called. Preload.js is the preloading library and simply takes a manifest array to determine what gets preloaded. Events are generated for things such as loading a file, problems with a file, and completing the manifest. I simply captured events for the completion of the loading and for errors. It is important to point out that a file that cannot be loaded does not block completion of the preloading, so if there are files that you require, make sure to detect errors in loading. Here is my code for setting up the preloading.

function init() {
    // setup preload.js with necessary drivers
    preload = new createjs.LoadQueue(true);
    createjs.Sound.registerPlugins([createjs.HTMLAudioPlugin]);
    preload.installPlugin(createjs.Sound);
    // preload.addEventListener("fileload", handleFileLoaded);
    preload.addEventListener("error", handleQueueError);
    preload.addEventListener("complete", handleQueueComplete);
    preload.loadManifest([
        {src:"images/SantaSnowball_atlas.png", id:"SantaSnowball_atlas"},
        {src:"sounds/SnowSplat1wav.mp3", id:"SnowSplat"},
        {src:"sounds/WishTest.mp3", id:"WishTest"},
        {src:"sounds/woosh1wav.mp3", id:"woosh"}
    ]);
}

Notice that there is an additional step that I did not mention in the prior paragraph: a plugin needs to be registered for sound. Not registering the plugin can result in pounding your head against a wall trying to figure out why sounds won’t play. Other plugins can be installed for different types of data, so it is conceivable that you could create your own data type and write a plugin that handles it. A good example of where this would be handy is a game that uses maps or some other type of level data.
Another thing that could be done when creating image atlases is to break things into two types of atlases. Sprites, which have transparent areas, can be placed into PNG files, while solid backdrops can be placed into JPEG files, which allows for much better compression. This is not something I did with Santa’s Snowball, but it will be a consideration for future projects that I work on.


Next week we will take a look at how I assembled the seven scenes that make up the game.

Wednesday, November 22, 2017

Why Write an Assembler?

With the disassembler coming together so easily and a nice table of op codes and their associated mnemonics, it seemed to me that I had everything needed to write an assembler as well. This was not part of my original plans for an emulator, but having an emulator that can take source code and let you edit it while running the program would be nice. When I implement the interpreter I am going to need to write a lot of small chunks of assembly language and step over them to make sure things are working. A built-in assembler would greatly aid this work and would even be very handy for game development.

Looking at the structure of my table, I realized that looking up mnemonics and finding the associated OP codes would be trivial to implement. A map of mnemonics can be created, with each mnemonic having a list of the different variants of the instruction and the associated OP code. If you know the mnemonic and the address mode, finding the OP code is simply a traversal of that list. Here is the code for generating the map.


for (inst in m6502.commands) {
    if (mapOfOpCodes.containsKey(inst.OPString))
        mapOfOpCodes[inst.OPString]!!.add(inst)
    else {
        mapOfOpCodes.put(inst.OPString, arrayListOf<M6502Instruction>(inst))
    }
}
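With the map in place, the lookup itself is a one-liner. This is a sketch; the addressMode and opCode field names are my assumption about the shape of M6502Instruction:

// Find the OP code for a mnemonic/address-mode pair, or null if the
// combination does not exist on the 6502.
fun findOpCode(mnemonic: String, mode: AddressMode): Int? =
    mapOfOpCodes[mnemonic]?.firstOrNull { it.addressMode == mode }?.opCode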

Traditional 6502 assembly language is not that difficult to parse. Or at least it does not appear to be. It should be possible to write a simple tokenizer and a simple parser to break down the assembly language and determine which address mode each instruction uses. Of course, there are other aspects to assembling, such as directives and labels, but how hard can they be? As it turns out, a bit harder than I expected, but not really that hard.
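To illustrate why the parsing looks tractable, the address mode can largely be read off the shape of the operand text. The following is only a sketch using the standard 6502 operand conventions, not my project’s actual parser; branch mnemonics (BNE, BEQ, and so on) would still need their operands reclassified as RELATIVE:

// Sketch: classify a 6502 operand string by its shape. Assumes hex
// operands written with $ (e.g. LDA #$44, LDA $1234,X).
fun detectAddressMode(op: String): AddressMode {
    fun zeroPage(s: String) = s.length <= 3 // "$44" is three characters
    return when {
        op.isEmpty() -> AddressMode.IMPLIED
        op == "A" -> AddressMode.ACCUMULATOR
        op.startsWith("#") -> AddressMode.IMMEDIATE
        op.startsWith("(") && op.endsWith(",X)") -> AddressMode.INDIRECT_X
        op.startsWith("(") && op.endsWith("),Y") -> AddressMode.INDIRECT_Y
        op.startsWith("(") && op.endsWith(")") -> AddressMode.INDIRECT
        op.endsWith(",X") -> if (zeroPage(op.dropLast(2))) AddressMode.ZERO_PAGE_X else AddressMode.ABSOLUTE_X
        op.endsWith(",Y") -> if (zeroPage(op.dropLast(2))) AddressMode.ZERO_PAGE_Y else AddressMode.ABSOLUTE_Y
        zeroPage(op) -> AddressMode.ZERO_PAGE
        else -> AddressMode.ABSOLUTE
    }
}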

While the assembler was not in my original design goals, with the above code coming into my head while I was still getting the disassembler written, I managed to convince myself that this would be an interesting path to explore. As I write this I have the assembler functioning and a partially implemented solution for labels, so it is well on its way. Development on it is now slow, as I am using most of my spare time to finish my Santa trilogy. The first of the three games will be posted this Friday since, even in Canada, Black Friday is when the Christmas season begins. Or at least the shopping part.

Next week will be a look at the game, which was developed without Adobe Animate CC, though still using the Create.js libraries. If you don’t have money to spend on Creative Cloud but have ActionScript or Flash experience, then this is actually a viable path. Even without Flash experience, Create.js is a good library for JavaScript games. More on that next week.

Wednesday, November 15, 2017

Disassembling the Disassembler

Writing the disassembler turned out to be even simpler than I expected. I had expected the work to be a bit time-consuming, as no matter which route I took I would need to deal with 56 different instructions, many of them supporting several address modes. There are various approaches that can be taken for disassembling instructions. For processor architectures such as the SPARC, there are very specific bit patterns that make up the instructions. A look over the instructions suggests this is probably true of the 6502 as well, but with 56 valid instructions and only 256 possible values, a simple table approach seemed to be the way to go.

The table approach sets up all the information as a table. By having a function pointer or lambda function in the table, you can also set it up to do the interpretation as well. This isn’t really that inefficient either, as it is a simple table lookup which then calls a function that does the interpretation work. The bit approach would be a lot messier, and with so few possible outcomes the table is not overly cumbersome to create. A more complex processor would be a different story, but for this project I will go with the table. Here is the format of the table:

OP Code: The number assigned to this operation. While not technically needed here, it is a good idea to have it to make sure the table is complete, and it will be needed if an assembler is desired in the future.
Op String: The mnemonic, the three-letter word used to describe the instruction.
Size: How many bytes (1 to 3) the instruction uses.
Address Mode: How memory is addressed.
Cycles: The base number of cycles for the instruction. Things such as crossing page boundaries or whether a branch is taken will add to this value.
Command: The code that handles the interpretation of this instruction.
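In Kotlin this maps naturally onto a data class. A sketch of the shape (the field names beyond OPString, and the signature of the command lambda, are my guesses rather than the project’s exact code):

// One row of the instruction table. The command lambda receives the
// emulator core (hypothetical M6502 type) and does the actual work.
data class M6502Instruction(val opCode: Int,
                            val OPString: String,
                            val size: Int,
                            val addressMode: AddressMode,
                            val cycles: Int,
                            val command: (M6502) -> Unit)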

Disassembling then becomes simply a matter of looking up the instruction and, based on the address mode, printing out the value or address that it works with. There are 14 address modes that I came up with, as follows:

enum class AddressMode {ABSOLUTE, ABSOLUTE_X, ABSOLUTE_Y, ACCUMULATOR, FUTURE_EXPANSION, IMMEDIATE, IMPLIED, INDIRECT, INDIRECT_X, INDIRECT_Y, RELATIVE, ZERO_PAGE, ZERO_PAGE_X, ZERO_PAGE_Y}

The meaning of the individual values in the enumeration is outlined in the following list. This will become important when the interpreter portion of our emulator starts getting implemented.
ABSOLUTE: Specifies the address that will be accessed directly.
ABSOLUTE_X: The address specified, offset by the value in the X register.
ABSOLUTE_Y: The address specified, offset by the value in the Y register.
ACCUMULATOR: The value in the accumulator is used.
FUTURE_EXPANSION: Unknown address mode, as the instruction is not official. For the instructions that I end up having to implement, this will be changed as necessary.
IMMEDIATE: The value to be used is the next byte.
IMPLIED: The instruction itself tells you which register(s) it uses, and those are what get used.
INDIRECT: Use the address located at the address this points to. So if this was JMP (1234) then the values at 1234 and 1235 would form the address to jump to.
INDIRECT_X: The next byte is a zero page address. The X register is added to this value. That byte and the one following it are then used to form the address to access.
INDIRECT_Y: The next byte is a zero page address. It is the low byte, and the following zero page byte is the high byte, of an address. The value in the Y register is then added to this address.
RELATIVE: An offset (relative to the next instruction) to jump to if the branch is taken.
ZERO_PAGE: Use a zero page address (0 to 255, so only one byte is needed).
ZERO_PAGE_X: A zero page address with the value of the X register added to it.
ZERO_PAGE_Y: A zero page address with the value of the Y register added to it.

Calculating the addresses is easy, but may seem strange to people used to big-endian architectures. The 6502 is little-endian: for addresses, the first byte is the low-order byte, followed by the high-order byte, so the address is first + 256 * second. For branching (relative mode), the target address is the start of the next instruction plus the signed value passed (-128 to 127).
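As a worked example, here are both calculations as small Kotlin functions (assuming the bytes are already masked to the 0 to 255 range):

// Little-endian absolute address: low-order byte first.
fun absoluteAddress(first: Int, second: Int): Int = first + 256 * second

// Relative branch target: signed offset from the start of the next
// instruction (branch instructions are two bytes long).
fun branchTarget(branchAddress: Int, offset: Int): Int {
    val signed = if (offset > 127) offset - 256 else offset
    return branchAddress + 2 + signed
}

So the bytes $34 $12 form the address $1234, and a branch at $1000 with offset $FB (-5) jumps to $0FFD.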

Next week will be a look at my assembler decision, with some hindsight about the process, as I am nearly finished with the assembler.

Wednesday, November 8, 2017

Test Driven Disassembly

When I first started programming, the procedure was simple. You would write the program and then you would test the program. The testing was generally manual, simply making sure that the program did what you wanted. This is fine when working on small or personal projects, but when projects get larger this is not a good way of doing things. Changing things can cause code to break, but as it is not tested for, the breakage can go unnoticed for a long time, and when discovered requires a lot of effort to find and fix.

The idea of automated testing helps solve this problem by making the testing process easy, as one just needs to run the tests after making changes to see if anything is broken. This does require that the tests exist, which can be a problem, as writing tests after the code has been completed makes writing them a chore that can be skipped if one is behind schedule. It also has the problem that the tests only test what is already known to work.

Test driven development moves the testing to the top of the development loop. This has the advantage that the tests are written before the code, so code always has test coverage. You make sure that the tests fail, then write the code and get it to pass the tests. You also have the advantage of thinking about how exactly you are going to test things, which may uncover issues before you have even started writing the code. A comparison of the three methods is shown in the flowcharts below.



As with pretty much every approach to programming, dogmatism can take over, and the advantages of test driven development can quickly be replaced by useless burdens. If you find yourself having to write tests for basic getters and setters then you have fallen into the dogmatism rabbit hole. I have been taking a middle ground with my work, coming up with tests before writing code. As some things are simply too difficult to write automated tests for, especially non-deterministic programs such as many games, manual testing is an option as long as you have a clear test plan. Automated testing is preferred, as the tests are always run, so problems are detected earlier.

For my disassembler, the test is simply being able to disassemble known code into the proper instructions. My original plan was to write some test assembly language that would cover all the instructions with their various address modes. The code wouldn’t have to do anything useful, just cover the broad range of assembly instructions. This got me thinking about non-standard instructions.

The 6502 has undocumented operation codes (OP codes), marked for future use, that still do something when executed. As this functionality is unofficial, using such instructions is not wise, since their presence is not guaranteed, but some programmers would use these instructions if it saved memory or cycles. As I do want my emulator to work with at least some real cartridges, which may use unofficial instructions, I need my disassembler to detect these instructions and alert me to their use so I can figure out what the instruction does and implement it in the future.

This means that all 256 possible OP codes need to be tested. As I want to be able to disassemble from any arbitrary point in memory, the test could simply be generated procedurally. My test memory was filled with the numbers 0 to 255, so if I set the disassembly address in sequence I would cover all the instructions, with the subsequent bytes being the address or value to be used. The fact that the instructions are of different lengths is not a big deal, as the disassembly address is controlled manually. The list of instructions is something that I already have, so creating the test result list to compare against was very simple.
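The test then reduces to a loop along these lines, inside a JUnit test. This is a sketch; disassembleStep, its return value, and expectedLines are stand-ins for whatever names the real test uses:

// Sketch of the procedural test: memory holds 0..255 repeating, so
// every possible OP code gets disassembled exactly once.
val testMemory = IntArray(65536) { it and 255 }
var address = 0
for (expected in expectedLines) {
    val line = disassembler.disassembleStep(testMemory, address)
    assertEquals(expected, line.text)
    address += line.size // instructions are 1 to 3 bytes long
}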

When running the test, if there is an error it is still possible that my test list is incorrect, but that is obvious enough to determine. Once the disassembler can disassemble the complete list, it is probably working, so as far as tests are concerned this is a good way of testing. Once I wrote my disassembler, I did have to fix the list, but I also found some real issues, so overall the test did its job. Next week I will go into the disassembler, which was surprisingly easy to write.

Wednesday, November 1, 2017

Halloween Scratch Postmortem

I have made huge progress on the emulator, finishing the preliminary disassembly and starting work on an assembler, so the next few months’ worth of blog posts will be catching up with where I am at with the project. The assembler was not part of my original plans, but after getting the disassembler working it didn’t seem like it would be hard. It turns out to be a bit more complex than I expected, but still worth doing. As soon as I get the assembler to a stable state (hopefully by next Wednesday’s post) I will post the code that I have written. I haven’t decided where the code will be hosted yet. Since I am so far ahead, I will spend a bit of time on my Christmas trilogy, so I may be posting three games over the next couple of months. But this week is the postmortem of my Halloween game.

Halloween Scratch was not originally going to be the Halloween port for this year, but while porting the vector assets of most of my Flash games into Create.js-compatible JavaScript, it became apparent that some games benefit from being developed in Adobe Animate CC (hereafter referred to as Animate) while others only benefit from having their vector graphics converted into Create.js code. Animate is not the cheapest of software, so with my license running out this month I decided that I would not be renewing it for a while, as I don’t really need it. Halloween Scratch is very animation oriented and a simple game, so finishing it in Animate while I still had the software made sense.


Halloween Scratch is a lottery ticket game where instead of winning a prize, you win a monster. There were other lottery ticket games at the time that were just click-to-reveal, and I wanted to demonstrate to a potential client that you could have a much closer feel to scratching a real lottery ticket.

What Went Right

The animations worked pretty much flawlessly, so very little work had to be done there, other than with the alien, as its transport effect used color filters. Color filters in Animate are costly, so they are cached, which means you need to either force the image to be re-cached or come up with some other way of doing it. Simply creating images in the colored states (create a copy of the image, then apply the color effect) was all that was needed. If your game revolves more around animation effects, then using Animate is helpful. Most of my games are more code oriented, so I am not sure it is worth it.

What Went Wrong

I had some strange issues with children in a container. I am not sure if this behavior stemmed from Animate, from Create.js, or from JavaScript itself. What was happening was that I used the target of the event listener to hide the dots that make up the scratch cover. There was a reset cover method that should have reset all the children to visible, but even though it thought it was doing so, nothing was happening on the screen, so already scratched-off areas remained invisible. I am not sure why this was happening, but I was able to get the display list to properly reflect the visibility of a dot by accessing the dot through the dots array instead of through the event target. It should not matter which reference to the object has its visibility changed, yet in this case it does. I suspect this is one of the “this” pointer related issues that I always run across in JavaScript.

Mixed Blessings

I think the original game did an excellent job of representing the feel of a lotto ticket. Unfortunately, it was originally an ActionScript 1 game, so the scratch code had to be pretty much re-written from scratch. I had hoped that roll-over events would be detectable on tablets, allowing tablet users to scratch the ticket. This was not the case, with the browser window being scrolled around instead. To solve this I added click support, so touching a tile reveals a block of the image. Not the best solution, but it does allow tablet users to play the game. An interesting side effect is that a tile can only be clicked if it has not been removed yet, so computer users are pretty much forced to scratch the ticket.

Overall, Animate is a good tool for porting animations and vector artwork to JavaScript, but once that is done the coding work is easier in other tools, making Animate more of a utility than a development environment. It is still a really good tool for creating animations, so it would not surprise me if I end up renting it again in the future, but for my porting work I am pretty much finished with it. Create.js is a workable solution for porting Flash games to HTML5, but ultimately you are working with JavaScript, which is not the greatest of languages to work with.

Wednesday, October 25, 2017

Thanks for the Memories

When I program, I try to follow this pattern: get it working, get it working correctly, then, if necessary, get it running fast. Premature optimization is one of the bigger problems that programmers face. Optimized code is often hard to read and is the ideal spot for bugs to lurk. Making premature optimization an even worse habit is that far too often you end up spending time optimizing the wrong code (not the thing that is actually causing the program to run slow) or optimizing code that you are going to replace later. This is why optimizing after you have finished something makes the most sense.

After writing my C++ memory system for my emulator project, I realized that I really didn’t like the code. Several professors that I have had would call this code smell. The thing is, I really didn’t know why I didn’t like the code, just that it didn’t feel right. The subconscious is really good at determining when something isn’t right but feeds this information to the conscious mind in the form of vague feelings. I have learned that it is best to try and listen to these feelings and work out what your subconscious is trying to tell you.

My initial thoughts on the problem were that the code was not going to be efficient. With an emulator this could be a big concern, as poor performance on memory operations would reduce the overall performance of the emulator. This led me to thinking about other ways I could handle the memory, and I realized that I was prematurely optimizing the problem. This, however, may be a case where that is a good thing. The memory subsystem will be used by everything in the emulator, so making sure the interface is locked down is important.

The big issue with the 2600 memory management is that I need to track reads and writes. If I only had to track writing, then memory could be a fast global array with only writes needing to be handled through a function. This got me researching the various 2600 bank switching schemes to see if any need to handle switching on a read. The most common bank switching scheme does the switch on a LDA instruction, so that approach will not work. As tools for refactoring code have improved immensely, making such drastic changes to the code later may not be that big of a deal, so I decided to leave things alone and port the existing code to Kotlin.

While re-writing the code in Kotlin, I realized that I may have been over-complicating things. In C++, the cartridge loader class would be passed to the A2600 class (the machine), which would then call the cartridge loader’s install code, which would tell the A2600 which memory manager to use. The A2600 (specifically the TIA emulator and the 6502 emulator) would access memory by calling the MMU class, and if the access resulted in a bank switch then the MMU would call the cartridge loader to adjust the banks. By having the memory accesses go through the cartridge, and having the MMU built into the cartridge (it could still be a separate class, but I don’t think that is necessary at this point), things are much easier, as this picture shows. This change should make any future optimization easier, alleviating most of my concerns.
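A sketch of the simplified arrangement, with hypothetical names, where the cartridge owns the read and write hooks so a bank-switching variant can watch every access:

// Sketch: the cartridge is the memory manager. The TIA and 6502
// emulation call readMemory/writeMemory, and a bank-switching
// subclass can watch the addresses as they go by.
abstract class Cartridge(protected val rom: IntArray) {
    abstract fun readMemory(address: Int): Int
    abstract fun writeMemory(address: Int, value: Int)
}

class StandardCartridge(rom: IntArray) : Cartridge(rom) {
    private val ram = IntArray(128) // the 2600's 128 bytes of RAM

    override fun readMemory(address: Int): Int = when {
        address >= 0x1000 -> rom[(address - 0x1000) % rom.size]
        else -> ram[address and 127] // grossly simplified address decoding
    }

    override fun writeMemory(address: Int, value: Int) {
        if (address < 0x1000) ram[address and 127] = value and 255
        // writes to ROM are ignored; a bank switcher would check the
        // address here (and in readMemory) and swap banks instead
    }
}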



While I am now starting my disassembler, or at least writing the tests for the disassembler, next week will be a postmortem of my Halloween game, which will be released this weekend. It is a port of a really old “game” that I did, and even though it is pretty low in the polling results, it is a cute game and is easier to port now than it would be later (more on why next week).

Wednesday, October 18, 2017

The Kotlin Decision

Just as there are a number of JavaScript replacement languages, Kotlin is a Java replacement language which produces JVM bytecode compatible with the Java language. What got Kotlin on my roadmap was Google announcing full support for the language on the Android platform. The Android course I took in university was Java based, which is okay. I am one of the few(?) programmers out there who don’t mind Java but do wish it was less verbose, compiled to JavaScript (or preferably asm.js), and could be compiled to native code. These are the things that Kotlin does, along with better type safety. This sounds to me like an ideal language, and with it becoming increasingly popular among Android programmers it may take off, meaning future work potential.

What I do when I want to learn a new language is skim through some of the books on the language to get an idea of the basic syntax and then find a small but real project to attempt in that language. Having a real project really tells you a lot more about a language than books will, and lets you know if a language is one that you can tolerate using or if it is one of those languages that you will only use if you are being paid to, such as COBOL. I have in the past had the problem of having way too many projects going on at the same time. I still have a bad habit of this but am going to try and keep my personal projects down to two, which at the moment are my re-write of Flash Game Development and my emulator/Coffee Quest 2600 project that I am developing on this blog. This means that if I want to learn Kotlin, I either have to wait until the book is finished or switch the language that I am using to develop my emulator.

As there are only a hundred lines of code written for the project, now would be the ideal time to switch languages. It would also let me develop code for the web (JavaScript) while also working on the JVM and on Android devices. The problem is that part of the reason I decided to go with the emulator project was to get my C++ skills back up to a useful level. There is a third option: develop the emulator in both C++ and Kotlin and see how well the languages compare for such a task. As C++ is a system-level language it should win hands-down, but if native Kotlin is comparable in performance then that may speak to the future direction of my personal projects.

So, tonight I am setting up my Kotlin development environment and porting my existing code over to Kotlin. I will then start working on the disassembler portion of the project in Kotlin. I have a really interesting way of doing the disassembly that will also be very helpful when I get around to writing the 6502 interpreter. Once I have finished the disassembler portion I will port the code to C++ and then make an assessment as to whether I want to continue developing in two languages or pick the language I wish to stick with.

So my decision is not to make a decision. This is probably a good decision, as it will give me exposure to the Kotlin language, so even if I ultimately drop it for C++ I will know whether it is an appropriate language for projects such as Coffee Quest. My Coffee Quest port, a future project, was going to be a port of the Java code into a language that could be compiled into JavaScript so it can run on the web as well as standalone. I had been thinking of porting to C++ and then using Emscripten to generate asm.js code, but if Kotlin works out then that may be the easier approach. Worst case, I waste a bit of time prototyping my emulator in a different language, but as this is a hobby project anyway that is not much of a loss.

Wednesday, October 11, 2017

Friday the 13th


I do not like the code that I wrote and am also considering switching this project to Kotlin as my test project for that language, so I am going to hold off discussing it this week and instead make an announcement. On Friday the 13th I will be updating my Spelchan.com site to have a less-ugly look. I will also be posting the first ported Blazing Games title, which will be 132 spikes.

Why that day? After all, doesn’t a horror movie franchise claim that this is an unlucky day? Well, porting games is drudge work and I consider it a horror, so what better day than that. I am using Adobe Animate CC for the porting but am thinking that it really is not worth the price, so I will probably switch to writing direct Create.js code once I finish the games I am porting for the upcoming revision of my Flash Game Development book.

The first few games that I have ported went smoothly, being easier to port than I feared yet nowhere near as easy as I had hoped. Animate CC converted the vector graphics and tween animations to Create.js code for me, but none of the ActionScript was ported over, requiring me to re-write the code myself in JavaScript. This is not overly hard, as most of the Flash API has equivalent Create.js calls, but remembering to put “this” in front of all class-scoped variables was a bit of a pain and often was the cause of any errors in the game. The speed of the games isn’t that great, but I am waiting for the WebGL version of Create.js to be officially released before I start playing with that.

Some readers may have noticed that I said I have finished porting several games, not just 132 spikes. My plan is to post one port a month for sure, with additional games posted when the occasion is appropriate (so yes, there will be a Halloween game). On Wednesdays where I have not made significant progress on my emulator, or at least don’t have new emulator topics to discuss, I will have a short progress report and apologize for my slow development time by posting another game that I ported.

My current porting plans are to finish the 10 games from my Flash book and then look at the poll results on BlazingGames.com to see which bigger series (right now One of those Weeks or Coffee Quest) is more desired, then go with one of those larger projects, doing smaller holiday games as appropriate. This plan is not set in stone, so it could change based on factors outside of my control.

Next week I will either be explaining my switch to Kotlin or reviewing why I think that my memory emulation code sucks. I do want to play around with Emscripten, but Kotlin does look like a really interesting language and may actually be a good language for tablet and web development, with work being done on a native compiler to boot. A tough decision is ahead for me. See you next week.

Wednesday, October 4, 2017

Emulating Memory

The 2600 emulator is going to need memory. As the 2600 is based on the 6502 processor, we know that it has at most a 16-bit address space (65,536 bytes), which is very little for modern processors. The obvious solution then is the following code:

unsigned char memory[65536];

Sadly, it is not going to be that easy, for several reasons. First, the 2600 used a cheaper version of the 6502 (the 6507) which only had 13 address lines, not 16, so only 8192 bytes were addressable. Making this even worse is the fact that only 12 of those lines were for the cartridge, so cartridges were limited to 4096 bytes. Some of you are probably thinking, “Wait a second, Bill, weren’t there 8K, 12K, and 16K cartridges for the 2600?” Yes, there were. This leads to the real problem with the above approach.

Because of the memory restrictions of the cartridges, developers who needed more memory for increasingly complex games had to come up with ways around the limit. The solution was several different types of bank-switching cartridges. The idea here is that a chip on the cartridge acts as a go-between, giving the console different memory based on certain actions by the program. The memory management unit could detect things like reads or writes to a certain address and use this to determine which bank of memory would be in place. This means that we are going to need to know when memory is read or written so we can handle such situations.

The next issue is the fact that cartridges were ROM. You cannot write to ROM.

As is common with computers, some memory is mapped to hardware devices, so you communicate with this hardware by reading and, especially, writing to memory within a certain address range. This is how you set up the TIA chip for displaying things on the screen. There are a few other IO operations that also work this way (though I believe they are handled by different chips).

So emulating memory is not going to be that easy. My initial plan is to break memory up into a cartridge loader and an MMU class that can be overridden to support several types of bank-switching schemes. The cartridge loader would be responsible for determining which MMU to use and then giving the MMU the ROM data. There could be different cartridge loaders as well, and I plan on having two. The first would be a very basic file-based loader that would be used during development and would simply load the ROM file from the hard drive. Emscripten, the tool that I am using to compile C++ into asm.js, does let you load files but does so using a virtual file system. This is a bit of a pain, so a second, more web-friendly cartridge loader would be designed so that I don’t need to deal with virtual file systems to change cartridges on a web page.


This project is being developed relatively live. This means that I am writing this as I work on the project. I am hoping I can get a few articles ahead of what I am posting but do want to keep what I post on this blog accurate to what I am going through as I develop this project so that readers can learn from both my successes and my failures. Tonight I am going to start coding and hopefully next post we will finally start seeing some of the code.

Wednesday, September 27, 2017

Why asm.js?

Developing a JavaScript application in C/C++ may seem strange, but there are many languages that compile, or transpile if you wish, into JavaScript. A large reason for this is simply that JavaScript is a scripting language and was not really designed for large applications, so it does not have some of the language constructs that aid in the creation of larger projects. The later versions of ECMAScript (the standard that JavaScript is based on) are attempting to solve that problem, with TypeScript being an example of what the future of JavaScript may be like. The problem, however, is that there are people using old browsers who may not have access to the latest language features. Therefore, tools such as TypeScript are useful, as they take new language constructs and compile them into older JavaScript code, making the code useful to a broader range of users.

Developing in C or C++ is quite a bit different from using a language like TypeScript that was designed to compile into JavaScript. The reason a person would do this would most likely be to port existing code to the web; in fact, Unreal and Unity are two examples of such uses of this process. In my case, developing in C++ makes some sense, as it would give me a very performant version that would run on computers (and possibly Android; iOS, however, does not allow emulation). The web version would be compiled into asm.js to give it satisfactory performance on browsers that implement asm.js, and it would still run on browsers that do not support asm.js.

The last sentence may sound contradictory, but only if you don’t know what asm.js is. For those of you not sure what I am talking about, asm.js is a subset of JavaScript that can be compiled into very optimized machine language. Browsers that support the standard will spot the asm.js code and compile it into native machine language for the best speed. Because the subset is designed for optimal code, the result is much faster execution than what JIT compilation would give you. Browsers that are not aware of asm.js just see perfectly valid JavaScript which they can run, though at JIT speeds, so the results would not be as good.
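To see why, here is a tiny, hand-written asm.js module, purely for illustration. The |0 annotations mark the values as 32-bit integers; an asm.js-aware browser compiles the module ahead of time, while any other browser just runs it as the ordinary JavaScript it is:

function AdderModule(stdlib, foreign, heap) {
    "use asm"; // signals that this module follows the asm.js subset
    function add(a, b) {
        a = a|0; // parameter type annotation: 32-bit integer
        b = b|0;
        return (a + b)|0;
    }
    return { add: add };
}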

There are a couple of other ways of compiling C++ for use on a web page. The most obvious alternative is Native Client (NaCl), which is Google’s sandboxing technology for allowing native code to run in a browser. This has the advantage that the code being run is already compiled, but it needs to be compiled for the processor that the client is using (so multiple versions would need to be created). The bigger issue is that many browser vendors are not implementing it, so it is largely limited to the Chrome browser.


The other alternative is the relatively new WebAssembly bytecode standard. Instead of using a subset of JavaScript, this uses a compact binary bytecode. This results in much more efficient downloading of the application, as the bytecode version is smaller than the source code version. I find this a fascinating development, as what is effectively happening is that the Java bytecode that is vanishing from browsers is being replaced with a new bytecode. It is receiving support from all the major browser vendors but is a relatively new standard. In the future, I may go with WebAssembly for creating web applications, but as it may be a while before the older browsers that can’t run it disappear, going with asm.js in the short term makes the most sense. Once I start getting some asm.js material out, I will see how difficult it would be to support both formats.

Wednesday, September 20, 2017

Emulator Project Starting

I have decided that I am going to attempt to allocate a few hours every Wednesday to work on a homebrew project. While going back to my NES RPG would probably be a popular choice, the 2600Dragons.com project that I did for university has me interested in creating my own emulator. I know that there are many emulators available for pretty much any old system that you can think of, so the work here is not really needed; moreover, the emulators available tend to be pretty well written. Still, creating an emulator from scratch would be a very entertaining project.

My choices for target platforms would obviously be the Atari 2600, the NES, or a custom 8-bit console that I create just for the sake of creating an original emulator. Depending on what happens with my Master’s degree, a variant of the third choice may be what ultimately happens, but for now I am thinking of writing a JavaScript 2600 emulator. I have already created a rough simulator for the TIA chip, though it does need more work. It is the easiest of the three options to work on, and much of the work here can be carried over to the other two ideas if it ever proves successful.

JavaScript is not the best choice of language for creating an emulator, but it does have the advantage that it works on the internet. For this reason I am also considering writing the code in C and using an asm.js compiler. Writing the emulator core library in C and compiling it to JavaScript would allow me to use the core library in other C projects if I decided to go that route. I haven’t used asm.js yet, so this would be an interesting experiment.

The project would then be broken down into getting a memory subsystem for loading cartridges working, then getting a disassembler working to turn the 6502 code on the cartridge into readable assembly language. Once I can disassemble code, I can implement the emulation of the processor. Get the TIA chip emulated, add some interface code, and I will have a rudimentary emulator. This sounds easy, but I suspect the path will be a lot harder than I anticipate.

And yes, I do plan on creating a CoffeeQuest2600 game for the 2600 which would run in my emulator.

Monday, September 11, 2017

When I said quarterly updates...

When I said quarterly updates, I meant every quarter of a decade or so. Okay, things got way too hectic, with university being only one huge time sink, so I never really had much time to do anything. With that complete, at least for the next year (I am considering going for a Master’s degree), I decided to take a look at my older material. With Java already dead on websites and Flash having a 2020 end of life, I have decided to close down Blazing Games. The Flash and Java games that people are interested in will be ported to HTML5 and moved to my spelchan.com domain (which will be undergoing a major face-lift).

I don't foresee myself doing that much in the way of game jams or home-brew development, though I did post my 2600dragons site, which I did as part of a web development course that didn't have a challenge option. It covers how the Atari 2600 works, so people interested in how older consoles work may find it interesting.

I am trying to decide if I should post updates about the Blazing Games games that I port here or if I should resurrect my Blazing Games Development Blog even though Blazing Games no longer exists. I am going to try to update either this blog or my Development blog more frequently.