In response to the lack of meaning in these API-based communication systems, which was observed quite early in the development of the Internet, a new standard began to emerge.

The term Semantic Web was coined in 1999, around the same time as B2B, and is defined as:

“a web of data that can be processed directly and indirectly by machines.”
Quote 3, Tim Berners-Lee

by the inventor of the World Wide Web, Tim Berners-Lee, and it was, and still is, supposed to revolutionize information search over the Internet (hence the moniker Web 3.0). To achieve this, frameworks such as RDF (Resource Description Framework) and OWL (Web Ontology Language) have been created, which can be used to produce semantically charged data that can be published over the web. These are necessary because computers cannot process natural language to extract its meaning; searches are based on keyword matching and various heuristics, which do not always yield the expected results. Similarly to the B2B promise, the Semantic Web promises a world in which machines can automatically process information on the web and return meaningful answers, not just answers matched by character comparison.
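
To make the idea concrete, the following is a minimal sketch of how such semantically charged data can be produced, assuming the Python library rdflib; the example.org namespace and the resource names are purely illustrative and not part of any published ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Illustrative namespace; example.org is a placeholder, not a real vocabulary
EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)

# Each statement is a triple: subject, predicate, object
g.add((EX.Berners_Lee, RDF.type, EX.Person))
g.add((EX.Berners_Lee, EX.invented, EX.WorldWideWeb))
g.add((EX.Berners_Lee, RDFS.label, Literal("Tim Berners-Lee")))

# The resulting Turtle document could be published on the web as linked data
print(g.serialize(format="turtle"))
```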

Unfortunately, the structuring trend is so prevalent in the IT discipline that even the developers of the Semantic Web fall into the same trap of defining terms that are only humanly intelligible. Frameworks like RDF and OWL are by nature the same kind of structuring languages that allow the definition of types and type hierarchies (ontologies in this case), as seen in the case of XSD. The difference is that, in the case of ontologies, the types are supposed to carry semantic charge because the model puts some emphasis on the terminology and the relations between elements.
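
The structural parallel with XSD can be seen in a small sketch of an ontology fragment, again assuming rdflib and the same illustrative example.org namespace: the statements below are type and subtype declarations of essentially the same kind as complex types and extensions in an XML schema.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Illustrative ontology namespace (placeholder)
EX = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("ex", EX)

# Class and subclass declarations: a type hierarchy, much like XSD types
g.add((EX.Agent, RDF.type, OWL.Class))
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Person, RDFS.subClassOf, EX.Agent))

# A property linking instances of these classes; still a structural declaration
g.add((EX.invented, RDF.type, OWL.ObjectProperty))
g.add((EX.invented, RDFS.domain, EX.Person))

print(g.serialize(format="turtle"))
```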

It will be shown later in this work that this semantic charge is in fact impossible to achieve the way ontologies are constructed today.

The people who create such ontologies are responsible for embedding this semantic information into the ontology in such a way that machines can automatically make use of it. Unfortunately, the misconception seems to persist that terms and relations alone actually hold the semantics associated with the object. Evidently, when humans look at these terms and relations they all make sense, but they make sense in the observer's intellect, not necessarily within applications. In the human world, terminology is important not because it is semantic in itself, but because it links to us, to our reality (objects, actions, et cetera), which in turn is semantic to us because we can operate with those objects and relations.

By analogy, given a specific ontology, the semantics will not lie in the terminology and definitions listed there; those are only semantic to humans. It will lie in what applications are able to do with that ontology in an autonomous manner. From this angle, the linked resource model is no better in terms of machine semantics than the classical model using APIs; it is just different: it does not operate on structures and primitive types, it operates on terms, objects, types and relations between them.
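
This point can be illustrated with one final sketch, again assuming rdflib and the placeholder example.org terms from above: the application answers a question over the graph, but it does so purely by matching URI patterns in the triple store, without any access to what the terms mean.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

# Illustrative data, reusing the placeholder example.org terms
EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Berners_Lee, RDF.type, EX.Person))
g.add((EX.Berners_Lee, EX.invented, EX.WorldWideWeb))

# The SPARQL query is resolved by pattern matching over triples;
# the machine manipulates terms and relations, not their meaning.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?thing WHERE { ex:Berners_Lee ex:invented ?thing . }
""")

for row in results:
    print(row.thing)  # prints http://example.org/WorldWideWeb
```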