Bleeding edge technology is a category of technology so new that adopters run a high risk of unreliability and may incur greater expense in order to make use of it. The term bleeding edge was formed as an allusion to the similar terms "leading edge" and "cutting edge", and tends to imply even greater advancement, albeit at increased risk because of the unreliability of the software or hardware. The first documented use of the term dates to early 1983, when an unnamed banking executive was quoted as having used it in reference to Storage Technology Corporation.
A technology may be considered bleeding edge when it carries a degree of risk or, more generally, when there is a significant downside to early adoption.
The rewards for successful early adoption of new technologies can be great, establishing a comparative advantage in otherwise competitive markets; unfortunately, the penalties for "betting on the wrong horse" (e.g. in a format war) or choosing the wrong product can be equally large. An organization that takes a chance on bleeding edge technology risks being stuck with a white elephant, or worse.
Bleeding edge computer software is especially common in open source development. Indeed, it is usual practice for open-source developers to release new, bleeding edge versions of their software fairly frequently, sometimes in a rather unpolished state, so that others can review, test, and in many cases contribute to it (beta testing). Users who want features that have not yet been implemented in older, more stable releases can therefore choose the bleeding-edge version. In such cases, the user is willing to sacrifice stability, reliability, or ease of use for the sake of increased functionality.