When applications are built, the scaffold of the business information is captured by the engineers, who create data structures through which a particular subset of that information can be stored, manipulated and, if necessary, communicated (Illustration 2). When such a communication need arises, the engineers of the two applications agree on a common structure, which is used to encode the information onto simple, highly standardized protocols such as HTTP, TCP, IP, or Ethernet.
Illustration 2. Common interface, definition (source code) perspective
Illustration 3. Common interface, execution (compiled code) perspective
Information is transferred as an unstructured payload of bytes over these media and is then decoded according to this common structure, or interface, after which the received data becomes available for processing at the destination. All the commonly used interfaces are grouped together into an API (application programming interface), which serves as a protocol for communication between the applications of the two parties (Stefan Harsan Farr, The Principles of a Semantically Rich Data Representation System, 2013).
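The encode/transfer/decode cycle described above can be sketched in a few lines. The record layout, field names, and helper functions below are hypothetical, chosen only to illustrate how both parties must share the exact same structure definition before any bytes can be exchanged:

```python
import struct

# Hypothetical common interface: both applications agree in advance on
# this exact layout (a 32-bit id followed by a 16-byte name field).
RECORD_FORMAT = "<I16s"

def encode(person_id: int, name: str) -> bytes:
    """Sender side: turn structured information into an unstructured byte payload."""
    return struct.pack(RECORD_FORMAT, person_id, name.encode("utf-8"))

def decode(payload: bytes) -> tuple[int, str]:
    """Receiver side: recover the structure from raw bytes using the agreed format."""
    person_id, raw_name = struct.unpack(RECORD_FORMAT, payload)
    return person_id, raw_name.rstrip(b"\0").decode("utf-8")

payload = encode(42, "Alice")            # what travels over HTTP/TCP/IP
recovered = decode(payload)              # meaningful only via RECORD_FORMAT
```

Note that the payload itself carries no field names and no meaning; everything hinges on `RECORD_FORMAT` being identical on both sides.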
But once the application is compiled and deployed, the semantics of the information stored in the code are lost to the software, in the very same way they become lost in the case of functions. Names are removed, and with them any semantics capable of conferring information status is destroyed. Whatever is put in there becomes mere data. As such, a human is needed at each end of the communicating applications to input the data and to interpret it, turning it back into meaningful information. The vast majority of data manipulation software today is not concerned with the meaning of the data it manipulates. After the information passes the user and enters the system, it turns into a meaningless structure, as in Illustration 3. It is therefore imperative that an exact match exist between the structures used for information interchange, because that structure is the only constant that exists at that point (during application run-time) within the application. Failure to adhere to it causes the data to deteriorate, and the resulting information is corrupted, being based on erroneous data. This strictness of the system and the lack of persistent semantics are an enormous impediment to the standardization of this layer of communication.
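The need for an exact structural match can be demonstrated concretely. In this hypothetical sketch, the sender and receiver disagree only on the order of two fields; the bytes still decode without any error, yet the resulting data is silently wrong:

```python
import struct

# Sender encodes a record as (age, account_id): two 32-bit integers.
payload = struct.pack("<II", 34, 1001)

# Receiver assumes the layout is (account_id, age) instead.
account_id, age = struct.unpack("<II", payload)

# No exception is raised, because the payload is just bytes with no
# semantics attached: the receiver now believes the account id is 34
# and the person is 1001 years old.
```

Because nothing in the payload identifies the fields, no layer of the stack can detect the mismatch; only a human inspecting the output would notice the corruption.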
Needs regarding information capture and encoding are enormously varied; there are as many as there are observers. Different businesses capture different characteristics of the same concepts. For example, a financial institution such as a bank would be interested in the financial aspects of a person, like income, employment, and assets owned; a health institution such as a hospital would likely be interested in things like body mass index, age, history of diseases in the family, and allergies; whereas an insurance company would probably be interested in a subset of both. No single structure can universally define a person, so if the three institutions are to interchange information, they need to agree on a strict API usable by all three. Unfortunately, as the number of potential participants in the communication rises and their focus broadens, the common interface becomes so clunky that it is very cumbersome to use. Even if such an all-encompassing interface could be built and enforced, there is always the potential of a future business that needs yet another feature. This would invalidate the standard in place and require an extraordinary effort to bring the industry up to date with the new standard.
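The divergence can be made tangible with three hypothetical record definitions. The class and field names below are illustrative assumptions, not part of any real system; they show how each institution models the same real-world concept, a person, through the lens of its own business:

```python
from dataclasses import dataclass

@dataclass
class BankPerson:
    """A person as seen by a bank: financial aspects only."""
    name: str
    income: float
    employer: str
    assets_owned: list[str]

@dataclass
class HospitalPerson:
    """A person as seen by a hospital: medical aspects only."""
    name: str
    age: int
    body_mass_index: float
    allergies: list[str]
    family_disease_history: list[str]

@dataclass
class InsurancePerson:
    """A person as seen by an insurer: a subset of both of the above."""
    name: str
    age: int
    income: float
    allergies: list[str]
```

Only `name` is common to all three; any shared interface must either shrink to that trivial intersection or grow toward an unwieldy union of every field any participant might ever need.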
Due to this lack of standardization, communication is limited to prearranged, pre-programmed interfaces that are built into the software and very costly to change.
Ironically, many businesses in the world have in fact the same or very similar needs; some even communicate using interfaces that are similar to, or exact matches of, those of other businesses unknown to them. Yet the lack of context in the system makes it impossible to match up these APIs, which leads to a continuous reinvention of the wheel and a perpetual need for human intervention to discover, agree upon, and integrate the communication protocols.