Talking Tech - The Case Against Bleeding Edge

A lot of people in computer science and software engineering share the opinion that bleeding edge is always the best option for any new project. Newer is better, right? I would argue that this isn't consistently the case. Occasionally newer is better, but often the reliability of older technology and the availability of its documentation make it the superior option.

Some people may not be familiar with the term "bleeding edge," as it seems to be pretty specific to the information technology industry. "Cutting edge" is pretty universal, but "bleeding edge" suggests a step further. Something that is cutting edge may be appropriately described as "latest and greatest"; bleeding edge is even newer, though not necessarily greater. The difference in my mind is that cutting edge technology is very new but already widely adopted, whereas bleeding edge is so new that most organizations haven't yet begun using it.

Bleeding edge technology creators and supporters often tout the vast improvements over existing options in the space. They’ll quote new features and performance gains, and by all measures seem like the obvious choice. If the new technology can do all these extra things in half the time, why would you consider anything else? The answer is risk.

In the business world, risk is one of the worst things a project can encounter. High costs that are known can be anticipated and mitigated as much as possible. Predictable trends are comfortable and easier to sell to stakeholders. Unknown factors that could introduce issues, push back deadlines, and add new costs are far worse. Therefore, most business executives will go to great lengths to minimize risk.

New technology often comes with substantial risk. What if too few developers adopt the technology? What if our project encounters some problem with the technology that the developers must fix? What if those developers choose not to fix the problem and we’re stuck with a technology that does not satisfy what the business needs? These questions are nightmare fuel for executives and a big reason why bleeding edge technology is so concerning.

A quick look at the technology adoption curve will show us the types of groups and individuals that might be more keen to gravitate toward bleeding edge technology. Innovators (the first 2.5%) and early adopters (the next 13.5%) are more focused on gaining an advantage in the market, and thus are more willing to accept risk (Rogers, 1962). In contrast, early majority, late majority, and laggards will respectively adopt the new technology later and later, with less of a tolerance for risk.
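The segments above can be thought of as cumulative percentile cutoffs along the adoption curve. As a rough sketch (the function name and structure here are my own illustration, not anything from Rogers), classifying an adopter by how far into the total population they fall might look like:

```python
# Rogers' adoption segments expressed as cumulative percentile
# upper bounds: 2.5%, +13.5%, +34%, +34%, +16%.
SEGMENTS = [
    ("innovators", 2.5),
    ("early adopters", 16.0),   # 2.5 + 13.5
    ("early majority", 50.0),   # next 34%
    ("late majority", 84.0),    # next 34%
    ("laggards", 100.0),        # final 16%
]

def adopter_segment(percentile: float) -> str:
    """Return the adoption segment for a given cumulative percentile (0-100)."""
    for name, upper_bound in SEGMENTS:
        if percentile <= upper_bound:
            return name
    raise ValueError("percentile must be between 0 and 100")
```

So an organization in the first percentile of adopters lands in `adopter_segment(1.0)`, the innovators, while one waiting until 70% of the market has moved falls into the late majority; the further right you sit on this curve, the lower your appetite for risk.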

Some might suggest that this first-to-market mentality is worth the risk of a technology that does not satisfy the needs of the organization, or one that falls into obscurity disproportionately early in its lifespan. I would argue that technology is a tool, a means to an end, and that the idea or product is the real marketable component here. The same idea built on several different architectures is going to be comparably successful regardless of the technology.

A product with reliability issues? That's a definite way to stifle success. Bugs that are a result of the underlying technology? Have fun selling that to the executive leadership team that backed you on the decision to take the bleeding edge option. Even if the newer option is slightly better on paper, a consistent and reliable platform with ample forums and documentation is more proven and will likely provide a better developer experience.

The primary shortcomings of "old" technology are its reduced remaining lifespan and a potentially inferior feature set and performance. Eventually, all technology is deprecated. It's just the way of the industry. Every single language and architecture that you use today will eventually be discontinued and replaced with something else. The later you adopt a technology, logically the less time you have before that technology is deprecated.

Feature sets are worth mentioning, but they come with a lot of caveats. More features always sounds better, but if you aren't actually using those additional features, they make no difference. New technologies might advertise laundry lists of features that their predecessors lack, but many or even most teams will never actually realize those as benefits.

Performance is one area where faster is universally better, but I would argue that many users would never notice the difference unless it is substantial. For business cases where millisecond improvements are critical, sure, go with the more performant platform. However, the vast majority of modern applications won't make or break on a few seconds' difference. Better performance is never a bad thing, but it may not be worth rewriting your entire application for nearly imperceptible improvements.

One important distinction is personal projects versus work projects. The benefits of going with something tried and true instead of bleeding edge are much more relevant to work projects. When I say “work projects,” I don’t just mean the things you do for your 9 to 5 day job. This could be freelance work, contracted efforts, or anything that you are doing for a corporate entity of some form. Personal projects are things that you strictly do for your own purposes, be it the love of coding or a software idea that you would use for personal benefit. If it has a deadline, it’s a work project. If you’re getting paid for it, it’s a work project. If a business has requested the code, it’s a work project.

I make this distinction because personal projects are just that - personal. If you want to go bleeding edge to feel out a new technology, go nuts! If you want to write something in IBM Series/1 Assembler just to see if you can, be my guest. Personal projects are really a space to try new and different things, so it makes more sense to use bleeding edge technology because there is less on the line. You can’t really miss deadlines for a project that doesn’t have deadlines anyway.

All in all, the decision of which technology to use for a new project, or when to migrate an existing project to a new technology, falls to the team, the management, and the business. It is an important conversation to have, and gaining support from all areas is critical for buy-in. Bleeding edge technology may be appropriate in some cases, but it is vital not to discount an option simply because it is older. Those older platforms often have better documentation, and many of the kinks have already been worked out. Think twice before you dismiss old reliable.

References
Rogers, Everett M. (1962). Diffusion of innovations (1st ed.). New York: Free Press of Glencoe. OCLC 254636.