New Feature Penalty Law

In most cases, when you create new functionality in a technology (hardware, software, etc.), there is going to be some kind of 'cost' built into the design (performance, complexity, storage, processing, risk of failure, etc.). This is what I call the New Feature Penalty Law, which basically states that 'almost all new functionality comes at the cost of something else'.

A basic example: it used to be very expensive to buy highly redundant systems, because they were built from custom, proprietary hardware and software. These systems were generally composed of two or more physical machines running together to create one virtual system. If the primary machine failed, the 'hot' backup running the same processes would take over, and there was no downtime. I have heard stories where a Tandem Computers system literally caught fire and no one noticed anything had happened (I don't know if the story is true or not, but it makes my point).
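The hot-standby pattern above can be sketched in a few lines of Python. This is a toy illustration, not how Tandem actually built it: the Node class, the health flag, and the selection logic are all hypothetical.

```python
# Hypothetical sketch of a hot-standby pair: both nodes run the same
# work, but only the primary's output is used until the primary fails.
class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def heartbeat(self):
        # In a real system this would be a network check with a timeout;
        # here it just reports the simulated health flag.
        return self.healthy

def active_node(primary, standby):
    # Fail over the moment the primary stops responding; clients see no
    # downtime because the standby is already running the same processes.
    return primary if primary.heartbeat() else standby

primary, standby = Node("primary"), Node("standby")
print(active_node(primary, standby).name)  # primary

primary.healthy = False  # simulate the primary catching fire
print(active_node(primary, standby).name)  # standby
```

The 'cost' side of the law shows up immediately: the standby is a whole second machine doing duplicate work just in case.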

Google took a different approach to creating highly redundant systems: they chose to use commodity hardware with a mix of open- and closed-source software to tie together lots of smaller machines into one larger system. This approach offered high scalability (from a CPU, RAM, and storage perspective), but came at the cost of hosting and managing thousands of smaller systems.
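The trade-off in the scale-out approach can be sketched as a routing problem. This is a generic illustration of the idea, not Google's actual design: the node names, replica count, and hashing scheme are all assumptions.

```python
import zlib

# Hypothetical scale-out sketch: capacity grows by adding cheap nodes,
# but every request now needs routing, and any single machine can fail,
# so each piece of data must live on several nodes.
NODES = [f"node-{i}" for i in range(1000)]  # thousands of small machines

def route(key, nodes, replicas=3):
    # Use a stable hash (crc32) so a key always maps to the same nodes,
    # and place it on `replicas` consecutive nodes so one failure loses
    # nothing.
    start = zlib.crc32(key.encode()) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

print(route("user:42", NODES))
```

Here the penalty is visible in the code itself: redundancy and scale are bought with routing logic, replication overhead, and a thousand machines to operate instead of one.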

The benefits and disadvantages of these two methodologies are too large a topic for this article, but my point is clear: in order to get new functionality, there is almost always some type of 'cost' that has to be paid. When you add new functionality, you have to look at a few different factors. One is the 'return on investment' (ROI), i.e. is the new feature going to profit you somehow? Another is the overall 'cost and benefit' (or disadvantage) of the new functionality itself.
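One crude way to frame that ROI question is to weigh the feature's expected benefit against both its build cost and the ongoing penalty the law predicts. The formula and the numbers below are made up for illustration only.

```python
def feature_roi(benefit, build_cost, penalty_cost):
    # Classic ROI: (gain - total cost) / total cost. The key point is
    # that penalty_cost (complexity, performance, operations) belongs in
    # the denominator alongside the obvious build cost.
    total_cost = build_cost + penalty_cost
    return (benefit - total_cost) / total_cost

# Made-up numbers: a feature expected to return 50k, costing 20k to
# build plus 10k in ongoing penalties.
print(feature_roi(50_000, 20_000, 10_000))
```

If the result is negative, the feature's penalties outweigh what it returns, which is exactly the situation the New Feature Penalty Law warns about.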