Categories: Software Development, Web Development

Why I Will Always Choose Firefox

Unlike most people who may be reading this, when I install a browser, it’s always Firefox. As a software developer working around other tech-literate people, I find that this simple fact draws a fair bit of confusion, since everybody else seems to treat Google Chrome as their go-to option.

I’ve had a great many friends over the years try to convince me to switch to Chrome, usually, I presume, out of an instinctive aversion to difference, and once because they wanted to prank me by installing the NCage Chrome extension. On that last occasion, I installed it just to see the prank, and then promptly uninstalled it again.

In the past, my reasoning for this choice wasn’t so strong. I first installed Firefox when I was perhaps in junior high school, and at the time I wasn’t very tech literate. I had basically every browser installed on my old Windows XP system, and any given week I might favor a different one. At one point, I even recall being a staunch Safari for Windows supporter. Perhaps that’s slightly less of a sin than my occasional use of Internet Explorer, though.

It was a few years later, when I was in high school, that I made my final decision. I wanted to use only Firefox… because of the cute fox mascot (I am aware that a fire fox is a red panda, but their mascot isn’t, so your point is invalid). No joke, that was the decision that led to where I am today.

Things have changed an awful lot since I made up my mind about my choice of browser, but I have used every major browser there is, and I remain up to date on the technological details and features of most of them. To this day, I feel that I made the correct choice, albeit for a silly reason at the time.

In the past, I felt happy to be different in my choice, because why not? Everybody else could use whatever they wanted, and so could I. The nature of the internet wasn’t at stake. But now, that has changed. With Microsoft choosing to rebase Edge on Chromium, there are only two remaining players in the browser engine arena: Google’s Chromium and Mozilla’s Gecko.

Competition Makes Better Browsers

One of the benefits of what may seem like duplicated work parallels one of the main benefits of capitalism in general: competition creates faster innovation. Because every browser team wanted theirs to be the best, they had no choice but to innovate faster, create richer features, and do whatever the people wanted as fast as possible, lest they fall out of favor and be replaced by a competitor. No doubt this is, and was, a stressful thing, but that stress undoubtedly yields a superior product from everyone in the game.

If Gecko were to fall, there would be only one browser engine left: Chromium. Do you think they’d need to innovate quickly? Absolutely not. They’d be in no danger of being replaced. Innovation would surely still continue, but not at the rate it has in the past.

Competition Enforces Standards

The web is entirely based on standardization right now. There are several large committees, representing hundreds of companies and organizations, that govern the standards of the web we use today. These standards are written guidelines with very specific rules, which ensure that multiple different implementations produce the same result when given the same web content.

In the past, this has worked very well. Despite the occasional difficulty, browser engines were able to keep up with these standards, and that’s why somebody could create one version of their website for Chrome, Firefox, Safari, Opera, Edge, and so on, and it would work the same everywhere (except Internet Explorer; they could never truly get it right).

The question it seems a lot of people might struggle to answer is why every browser cared so much about following standards. Why couldn’t Chrome create its own method of drawing on a canvas, or aligning content, for example? Surely doing so would give them the ability to claim that they had a special, extra feature that nobody else did. In a way, it’d be much like Microsoft holding exclusivity on an API like DirectX. Anybody wanting to use that API needed Windows (until Wine happened, of course). The problem, which DirectX faces on a smaller scale, is that developers will begin to prefer alternative APIs that support more browsers with the same code, and users of other browsers will begin to see that browser as something of a non-team-player. Every time they go to a website that doesn’t work correctly because of the lack of standardization, they’ll blame both the developer of the site and the developer of the non-standard browser. Further, if the other browsers are all standardizing and one isn’t, then they all become the browsers that “Just Work” and the odd one out is… well… the odd one out.

I do believe that the initial adoption of web standards happened because browser developers are web developers too, and they likely did it out of a desire to make the web a better place, but years later, standards are no longer optional, and competition is why.

If Gecko dropped out of the competition and left Chromium all alone, would the Chromium team have to follow standards? Absolutely not. I believe they would try, but they’d have all the leverage. If they wrote an API, it’d become a de facto standard, because everybody would have it. If they wrote an implementation wrong, it wouldn’t matter, because theirs would hold the vast majority of the market share, so the actual standard implementation would become the one that’s “wrong.” This might sound good to some out there, because it would surely streamline the standards process, but it puts all of the power in the hands of one project, one team, one company. It’d essentially be giving Google a monopoly on the internet. They wouldn’t necessarily use that for evil; the problem is that they easily could.

The Fight

Maybe I’m too idealistic, but now that Gecko is the last competitor to Chromium, this has become a real fight to me. I won’t switch to Chrome, not just because I like Firefox more, but also because I believe in the open, competitive web, and that can only be taken from me over my cold, dead, Gecko-based browser. And I hope some of you will join me in that fight.

Categories: Game Development, Game Jams, Software Development

Ludum Dare 41 Postmortem

Ludum Dare 41 came to an end just a little more than a week ago. I believe I’ve finally recovered from the mayhem and come out the better for it in the end.

Project Status

Unfortunately, my Ludum Dare 41 project, a “turn-based AI battle-royale game” called “Turn-by-Turn Deathmatch,” was not finished when the 72-hour deadline hit. This was, unfortunately, my expectation for this one. I may continue working on the game in the future, but I am particularly averse to the theme and not satisfied with the idea that I was able to come up with.

The Theme

My struggle for this Ludum Dare began with the theme announcement. The theme for 41 was “combine two incompatible genres.” I know a lot of people had differing opinions on this, but mine is that it is the worst theme yet. For those of us who are relatively new to the game-design scene, the theme asks us to throw out any convention of design that we may have relied upon to make a game that is not a complete flop. For those more experienced, I will grant that it does provide a unique challenge, and for that reason I was somewhat conflicted. That said, it is my opinion that Ludum Dare should be welcoming to new developers, as it is often one of the first exposures to the indie game dev scene that they will get. It should be a good impression. I believe this theme, by significantly raising the difficulty of creating a workable concept, likely created a bad first experience for them.

I created a very large, very time-costly, adjacency-matrix-like spreadsheet from an incomplete list of game genres, which ranked the combination of any two genres as “good,” “neutral,” or “bad” based on how well it fit the theme, which is to say that a “good” combination was two genres that one would not expect to see combined. Even with this chart, I found myself struggling to come up with an idea, because as it turned out, many of the “good” combinations sounded like awful games, or they required some complexity that I knew would take longer than the deadline allowed. For example, I really wanted to do a “first-person platformer,” which would be a 3D game where you basically see something like Super Mario from Mario’s eyes. It’s been done before, and I recall positive reception. Unfortunately, my 3D-focused game engine did not have a completed rendering engine, and with 3D being somewhat new territory for me, I scrapped that idea. Eventually I settled on combining “turn-based” and “shooter,” and then realized that “shooter” could be replaced with “battle-royale” if the enemies are roughly equally equipped and trying to kill each other as well. I thought it was a neat concept, but I also knew that I had minimal experience with AI, and no experience with turn-based games. I was also looking at just over 24 hours left on the deadline when I had the idea nailed down, so I knew it would be a big challenge.

The Technical Struggle

I wanted to go into this jam using my custom 2D game engine, ArcticWolf. I saw this as an opportunity to, as I did last time, use the game jam as a sort of excuse and motivation to further my progress on the mundane core subsystems of my game engines. I had intended to work out a simple painterly blitting rendering engine, because I thought it would be the most straightforward design, with the fewest places for issues to crop up. What I did not realize, however, was that the painterly renderer’s requirement to draw things in the correct order would prove to be a nightmare of a challenge.

In C++ there is no STL container that allows indexed access and automatically keeps its elements ordered based on a comparison. There is a container called std::priority_queue, but it only provides access to the front element in the queue. That could work if we rebuilt the entire rendering queue every frame, but I knew that would be a huge detriment to the performance of the game, since elements would rarely (probably never) be changing their z-indices. What I really wanted was a std::priority_vector.

Well, I hate to break this to you all, but no such thing as std::priority_vector exists. So I made one. In the ArcticWolf code, there is a class called aw::PriorityVector, which does everything that I needed. It took a lot of work, but it is one of the things I created during this game jam that I am most proud of. It’s very simple: all it does is store elements into the vector in the correct order based on a comparison type, the same kind that you might pass to std::sort. If the attributes defining the comparable values of the contained objects change, there’s a method that reorders the entire container, which is expensive, but useful if z-indices change. Another couple of useful helper classes were created to be used with this: aw::PointerGreater and aw::PointerLess. They are equivalent to std::greater and std::less, but they dereference each element being compared. This was necessary because the aw::Renderable class is polymorphic and referenced via pointer only, which meant the usual comparisons would have compared addresses, not values.
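
To illustrate the idea, here is a minimal sketch of a sorted-on-insert vector with a pointer-dereferencing comparator. This is not the actual ArcticWolf code, just a simplified, hypothetical version of the same concept:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <vector>

// Keeps elements ordered by Compare on every insert, while still allowing
// indexed access like a plain std::vector.
template <typename T, typename Compare = std::less<T>>
class PriorityVector {
public:
    void insert(const T& value) {
        auto pos = std::upper_bound(data_.begin(), data_.end(), value, comp_);
        data_.insert(pos, value);
    }

    // Re-sort everything; expensive, but useful if keys (e.g. z-indices) changed.
    void reorder() { std::stable_sort(data_.begin(), data_.end(), comp_); }

    const T& operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return data_.size(); }

private:
    std::vector<T> data_;
    Compare comp_;
};

// Comparator that dereferences pointers before comparing, in the spirit of
// aw::PointerLess, so polymorphic objects stored by pointer compare by value.
template <typename T>
struct PointerLess {
    bool operator()(const T* a, const T* b) const { return *a < *b; }
};

int main() {
    PriorityVector<int> v;
    v.insert(3);
    v.insert(1);
    v.insert(2);
    for (std::size_t i = 0; i < v.size(); ++i) std::printf("%d ", v[i]);  // prints: 1 2 3
    std::printf("\n");
    return 0;
}
```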

The main technical struggle, however, was the complexity of my ambitious rendering engine. I designed it to render a scene containing polymorphic layer objects, each of which can contain a polymorphic aw::Renderer and a bunch of objects containing aw::Renderables, which get passed as pointers back to the aw::Renderer when the layer is rendered. With this system, I could have a base layer that is a tile map representing the game world (which I also intended to support multiple layers with different parallax values, to make it a 2.5D engine if I wanted), then a sprite layer on top of that into which the player and other entities would be placed, and then a UI layer on top of that which uses a rendering pattern similar to the HTML5 DOM.
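
In outline, the design looked something like the following. Again, this is a rough sketch with hypothetical names rather than the real ArcticWolf interfaces:

```cpp
#include <memory>
#include <vector>

// Anything that can be drawn; concrete types (sprites, tiles, UI widgets) derive from this.
struct Renderable {
    virtual ~Renderable() = default;
    virtual int zIndex() const = 0;
};

// A renderer receives pointers to renderables, already ordered for painterly drawing.
struct Renderer {
    virtual ~Renderer() = default;
    virtual void render(const std::vector<Renderable*>& items) = 0;
};

// Base class for layers; the scene simply walks its layers from bottom to top.
struct Layer {
    virtual ~Layer() = default;
    virtual void render() = 0;
};

// One concrete layer type: owns its renderer and hands it pointers to its renderables.
class SpriteLayer : public Layer {
public:
    explicit SpriteLayer(std::unique_ptr<Renderer> renderer) : renderer_(std::move(renderer)) {}
    void add(Renderable* r) { renderables_.push_back(r); }
    void render() override { renderer_->render(renderables_); }

private:
    std::unique_ptr<Renderer> renderer_;
    std::vector<Renderable*> renderables_;
};

// The scene: a stack of layers (tile map at the bottom, sprites above it, UI on top).
class Scene {
public:
    void addLayer(std::unique_ptr<Layer> layer) { layers_.push_back(std::move(layer)); }
    void render() {
        for (auto& layer : layers_) layer->render();  // painterly: bottom layer first
    }

private:
    std::vector<std::unique_ptr<Layer>> layers_;
};
```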

I spent a lot of time on this rendering code, and I am glad to say that I made quite a bit of progress in the process, but it also means that I didn’t have as much time as my idea would have required. I was able to get the preloading and menu completed, but not much beyond that. I actually ended up trying to temporarily bail on the new rendering engine and write raw rendering code within the last 12 hours of the jam, as a last-ditch effort to get a workable game by the deadline, but by that point I had not slept in a long while and I didn’t have the focus to keep myself awake.

At some point, I decided that this jam and its very undesirable theme were not worth the suffering. I remember waking up with my face down on my laptop, completely disoriented, with the morning sun shining brightly on me, and at that point I decided to close the project on the ldjam website and go to sleep. I woke up at about 11 PM that evening. So much for my sleep schedule.

Future Plans

I was quite disappointed in the theme for this Ludum Dare, and I can’t help but wonder whether the reason we’ve started getting worse and worse themes over the last few years correlates directly with the improvements to the ldjam website that make voting easier. The theme received well in excess of three thousand votes, but based on past trends, I think we can expect only about a third of those people to submit games. Thus, it seems likely that many of the people voting on the theme have little to no intention of making a game for it, and therefore no interest in making sure the theme is workable, reasonable, and doesn’t cut out new developers.

My suggested improvement would be to allow only users with submitted, completed, and rated jam or compo entries to vote on themes. In my opinion this would drastically improve the themes. That said, I don’t think that is likely to happen. Given that, next Ludum Dare I may or may not participate, depending on the theme. If the trend of worsening themes continues, I will not be participating in Ludum Dare 42. Nonetheless, I do intend to do a game jam in August. That is not in question; what is in question is whether I will follow the chosen theme or the standard deadline. I’ve been toying with the idea of running my own week-long jam with a theme chosen either by myself or by taking one of the losing themes from a previous Ludum Dare. Obviously I’d lose out on some of the community features of Ludum Dare, but it would still provide more or less the same training effect, and I think in the course of a full week, I’d be able to create a more-than-trivial game design.

Categories: Programming Languages, Software Development, Web Development

JavaScript: Love, Hate, and Standardization

Recently, I’ve read a lot of hate toward JavaScript (which is easy to explain; see my previous post titled “Programming Language Wars”), and specifically there are a number of people who would ask browser vendors and the web standards organizations to implement additional or alternative programming languages. This is a very naive request that I believe stems from a lack of respect for standardization and for the challenges facing browser development.

First of all, “JavaScript”, AKA ECMAScript, AKA ECMA-262, AKA ISO/IEC 16262, exists primarily as a standard, and that’s the important part here. Despite its origins, JavaScript has been around for decades, and will likely persist for decades, maybe even centuries to come. It was selected to be the core of the programming side of the standardized web. This standardization has been particularly beneficial in making the web more or less completely compatible across a variety of browsers. JavaScript is defined in clearly written specifications, and as a result, each implementation, from Firefox’s SpiderMonkey, to Chrome’s V8, to the dozens of other complete implementations, can be implemented separately and remain mutually independent, while each supporting the same code, so long as the developers making these products follow the standards correctly. This also leaves room for competition between distinct implementations instead of reliance on a centralized one, which cultivates innovation.

There are many languages with a similar standardization and specification process, but this is not the only important characteristic of JavaScript. As a language designed for the web, it cannot make any assumptions about the architecture on which it will run. This means it needs to be provided either as a scripting language (which JavaScript is) or as virtual machine bytecode (which WebAssembly is, and which I’ll get into later). In addition, the language is supposed to be entirely forward compatible, so that code written a long time ago is capable of running on newer engines without modification. For some languages, this requirement can be, and is, safely ignored. Allowing things to be routinely deprecated and changed breaks old code, and in those cases the solution is often simply to run the code on an outdated interpreter. This is not an option for browser vendors, however, whose releases already push several hundred megabytes in size, to say nothing of the nightmare of deciding which versions are still to be shipped, or the support nightmare of requiring end users to manually choose their interpreter versions. These characteristics rule out a number of other possible languages.

Yet another issue with implementing other languages is the prevalence of JavaScript. Whether or not you like it, which is really not of concern here, JavaScript is arguably the most popular programming language in the world right now. Its forgiving nature cultivates ease of use for the beginner, largely because it is dynamically typed and has automatic garbage collection. Additionally, the massive amount of development done because of the web’s importance has made JavaScript very performant. It also has a very C-like syntax, which is popular and familiar to many developers.

Some people are hasty to criticize a language or library for catering to the needs of the less-experienced. These people usually complain about the loss of quality by making programming more accessible. But as is usually the case, ease of use does not preclude advanced functionality. JavaScript is a very mature language, and as such, it has a large user base that demands access to more advanced features and the ability to optimize performance more than an entry-level user requires.

And finally, in my opinion the biggest reason against implementing new languages for the browser is maintainability. As I write this, the Chromium JavaScript implementation, V8, has 1350 open bug reports. That is the number for a single programming language. If the web standards were to mandate the addition of another language, it would have to be implemented in yet another system of roughly equal complexity, and this would cripple the progress of development teams that even now are hard at work steadily advancing the single language implementation they already have to maintain.

All of these arguments aside, however, I fully understand the desire of developers to work in a language that suits their own style. I know the struggle of being forced to work with a programming language that I don’t like. So if I believe that, why do I still maintain that other languages should not be added to the web standards? First, I believe the benefit is greatly outweighed by the detrimental effect it would have on the speed of development of the open web standards. Second, if new languages are implemented, sooner or later even those languages will come under fire for essentially the same reasons.

What we need isn’t to explicitly modify those standards to include more languages, but to provide a common foundation on which new languages can be built without requiring modification to the central standard itself. The simple answer to this is an intermediate language or virtual machine.

So far, JavaScript itself has acted as this intermediate language. Languages such as TypeScript, CoffeeScript, and Dart transpile into simplified JavaScript with boilerplate to implement missing features. Emscripten has even allowed the transpiling of languages such as C or C++ into JavaScript.

More recently, however, there is a new standard known as WebAssembly, which I believe solves this problem even better. One of the main purposes of WebAssembly was to provide an avenue for higher-performance computing in a browser environment. One of the lowest levels of abstraction of an execution environment is assembly language. It encodes the primitive computational operations of the hardware itself, and has very few, if any, high-level features. This lower level of abstraction also means that instructions written in it have little to no overhead. Because web languages must be architecture-neutral and provide some level of access-control security, WebAssembly had to be implemented as a virtual machine architecture that is conceptually close to a CPU architecture, but with a universal specification and the ability to restrict the execution context to the resources it should have access to. This design is remarkably similar to that of the Java virtual machine.

In the future, I suspect that many more programming languages will have either integrated or third-party backends that allow software to be compiled to run in this environment. Getting this done for each language will require a fair amount of work, but I think it is the best way forward. With this method, any language could become a web-compatible language in the future, without needing to be integrated into the web standards, and without requiring extra work from the browser developers. Funneling all languages into the same intermediate language also ensures compatibility between them. And finally, because this method requires a compilation step, it means that if the programming language is updated and breaking changes are introduced, previously compiled code will continue to function as normal until recompiled. The breaking changes may therefore still impact the developer, but not the end user.
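
As a concrete illustration of what targeting WebAssembly looks like today, here is a tiny C++ example built with the Emscripten toolchain. The function name and build command are illustrative placeholders, not something from a specific project:

```cpp
// add.cpp - a minimal function exported to WebAssembly via Emscripten.
#include <emscripten/emscripten.h>

// extern "C" prevents C++ name mangling; EMSCRIPTEN_KEEPALIVE keeps the symbol
// exported in the generated module even though nothing on the C++ side calls it.
extern "C" EMSCRIPTEN_KEEPALIVE int add(int a, int b) {
    return a + b;
}

// Build (roughly): emcc add.cpp -O2 -o add.js
// The generated add.js loads add.wasm in the browser and exposes the function,
// so page scripts can call it, e.g. Module._add(2, 3) === 5.
```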

So to sum up: NO, new languages should not be integrated into the web standards, but YES, developers of other languages should utilize transpiling and WebAssembly to make their languages available for web development. Other web-friendly programming languages should not be competing with JavaScript; rather, they should see JavaScript and WebAssembly as their natural choice of intermediate compilation target.

Categories: Software Architecture, Software Development, Web Development

Software Architecture and the YAGNI Principle

I consider myself a developer with a good appreciation for a well-designed and carefully implemented architecture. I like to do things the “right” way whenever possible, instead of the quick way. I try my best to make everything I write modular, with minimal dependencies, and adaptable to any platform. Over the years I’ve been programming, however, I’ve learned that while that is a noble goal, there are certain occasions where the effort to reach it is not actually worth it. The key driving principle behind what I am referring to is the “YAGNI principle.”

YAGNI, meaning “You Aren’t Gonna Need It,” is the idea that you should never add a feature to your code base unless you are certain that you are going to need it soon, or at least eventually. I think that for many software developers who are relatively new to the craft, and who begin to learn about all of the architectural design patterns that can keep a code base from turning into something that keeps you awake at night, sweating with fear, there is a tendency to develop a fear of ever cutting corners. There is a general sense that making things more “smart” is always worth the effort, regardless of the cost. But YAGNI says that is not so. This is a principle that I unfortunately had to learn the hard way about three years ago.

I was working on an (unfortunately closed source) web-based asset management system for my previous employer. Believe it or not, this was the first piece of professional software that I worked on (as in, the first that I was getting paid to make). I started work on it in mid-2014, and at that time I was building it on a fairly difficult-to-maintain procedural PHP code base, with no libraries or frameworks whatsoever. No functionality was being delegated to JavaScript; it was just going to be PHP doing all the work and sending back a big chunk of HTML. In other words, it was an atrocious mountain of spaghetti at that point.

During that year, I worked on a lot of other projects and began to learn quite a bit more about the PHP language, and above all, I decided it was high time I learned how to use object-oriented programming to make my code a lot more maintainable, “DRY” if you will. So I dug into some tutorials and messed around with it, and got a general sense of how it worked. I also started learning about basic template systems, AJAX, and RESTful principles.

Some time in 2015, I decided to take all of this new-found knowledge back to the old asset management system code base, and try to apply it everywhere I could. And so I spent a large amount of time modernizing the code base and getting it up to par with what I thought was excellent design. And most of the ideas I had stuck around through the remaining years, more or less, so I know I wasn’t completely misguided.

There was one idea that I had, however, that was the biggest waste of time, and one of my absolute biggest regrets in the entire project. This is the purpose of this postmortem: to learn from this mistake that cost countless hours of valuable development time.

I decided that it would be cool to make this piece of software as portable as possible, so it was easy to install on any system my employer would ever use. I followed a lot of the patterns that I’d seen in other self-hosted web applications. Despite the fact that all of our apps only used MySQL as a database server, I wanted to make it possible to run it on MySQL, PostgreSQL, and SQLite. I also wanted to make this feature a library that I could use in other projects, and other people could use in theirs, so I did release that code under the MIT license on GitHub.

That idea, in itself, is not always bad. There are truly some occasions where doing such a thing would be valuable. They are all relational databases, so despite some syntactical and feature differences, they operate on the same core concepts. The way I decided to implement this cross-SQL-platform support was by abstracting the entire concept of a relational database into PHP, and having that PHP act as a code generator, writing SQL for prepared statements and submitting the relevant data. Essentially, the idea was that instead of writing SQL queries, I would have database objects with methods allowing me to modify the structure and data in the database. The library would have to be pretty intelligent in figuring out how to safely and efficiently write a SQL query to perform the requested operation, so there were certainly security and performance risks associated with it.
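
The shape of the approach was roughly the following. The original library was written in PHP; this is only a tiny C++-flavored sketch with made-up names, showing the core idea that callers never write SQL themselves and the abstraction generates it for prepared statements:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// A toy "table object": the caller describes the operation, the object writes the SQL.
class Table {
public:
    explicit Table(std::string name) : name_(std::move(name)) {}

    // Generate "INSERT INTO <table> (cols...) VALUES (?, ?, ...)" for a prepared
    // statement; a real library would also bind the values and handle the
    // dialect differences between MySQL, PostgreSQL, and SQLite.
    std::string insertSql(const std::vector<std::string>& columns) const {
        std::string cols, marks;
        for (std::size_t i = 0; i < columns.size(); ++i) {
            cols  += (i ? ", " : "") + columns[i];
            marks += (i ? ", " : "") + std::string("?");
        }
        return "INSERT INTO " + name_ + " (" + cols + ") VALUES (" + marks + ")";
    }

private:
    std::string name_;
};

int main() {
    Table assets("assets");
    std::cout << assets.insertSql({"name", "department_id"}) << "\n";
    // prints: INSERT INTO assets (name, department_id) VALUES (?, ?)
    return 0;
}
```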

I spent months adding each feature and testing them one by one as I started to integrate the library with the existing code base. For all of the features that I actually integrated and tested, it worked quite well. The problem, though, was the incredible complexity of the system. SQL is a full programming language precisely because of that complexity, and somehow I didn’t grasp that at the beginning of the project.

At some point, well into the project and not gaining the kind of traction that I wanted, I kept the library code on GitHub as before, but I scrapped integrating it into the asset management system, instead opting for another database abstraction design that I had learned and found to be much more reasonable for a project that is unlikely to ever be moved off the RDBMS it started on (but is still flexible enough if it is). That approach was to create a single static class representing the database, with very basic abstraction methods kept private and used by public methods representing higher-level operations, like inserting a new item or editing an existing department. This means that all of the database-related code lives in this single file, so it’s easy to change whenever needed, and I can safely assume that any operation can be written specifically for MySQL. If the RDBMS needs to be changed one day, then the queries in this file would need to be rewritten, but the rest of the code base would not need to be touched, as the interface would remain entirely the same.
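
In outline, the replacement design looked something like this. The real class was PHP against MySQL; the sketch below is a C++-flavored stand-in with hypothetical method names and a stubbed-out exec() so it stays self-contained:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

class Database {
public:
    // High-level operations: the rest of the code base only ever calls these,
    // so only this file knows (or cares) that the SQL underneath is MySQL-flavored.
    static void insertItem(const std::string& name, int departmentId) {
        exec("INSERT INTO items (name, department_id) VALUES (?, ?)",
             {name, std::to_string(departmentId)});
    }

    static void renameDepartment(int id, const std::string& newName) {
        exec("UPDATE departments SET name = ? WHERE id = ?",
             {newName, std::to_string(id)});
    }

private:
    // Basic abstraction helper. A real implementation would hand the query and
    // parameters to the MySQL driver as a prepared statement; this stub just
    // prints them so the sketch can run on its own.
    static void exec(const std::string& sql, const std::vector<std::string>& params) {
        std::cout << sql << "  [";
        for (std::size_t i = 0; i < params.size(); ++i)
            std::cout << (i ? ", " : "") << params[i];
        std::cout << "]\n";
    }
};

int main() {
    Database::insertItem("Projector", 3);
    Database::renameDepartment(3, "Facilities");
    return 0;
}
```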

The result was most of the benefits of the abstraction library at a fraction of the complexity. This design took far fewer lines of code and much less time to implement. Building every feature of this static class from the ground up took less than half the time that I had spent on the still-incomplete full database abstraction.

So what is the lesson to be learned here? Don’t implement a full database abstraction because it’s a waste of time? Absolutely not. As I alluded to before, there are certainly some projects that would benefit from such a thing, and in those cases, you’ll just have to suck it up and trudge through thousands of lines and dozens of classes to make it work. But before you go off on such a costly mission, you should make an honest estimate of how often you expect the code base to switch its RDBMS. Maybe you’re marketing your software to a lot of varied users who may not want to install your database server of choice. In that case, you might want to consider it. But in my case, had I simply asked myself that question, the answer would clearly have been “probably never, maybe once in 5 to 10 years.” I should not have done it. Frankly, the cost of rewriting the queries in the static class MAYBE once would have been much less than the amount of time that I spent writing that database abstraction library.

So remember the YAGNI principle during the design phase of your project. Whenever you see something potentially complex getting added to the design, make sure to ask yourself, “Are you really gonna need it?” and if the answer isn’t a certain “yes,” then just remember that it’s okay to rewrite things later on when you have more information. Refactoring your code later just might be a lot cheaper and better than worrying about the minute details so early on. The finished product is rarely the same as the finished concept anyway, so be an adaptable programmer first, and write an adaptable code base later.