SPInDL is a knowledge representation language, not an ontology. In fact, SPInDL does not have ontologies at all, at least not in the sense in which the term is used in today's KR systems.

Account for the properties of information

Ontologies like FOAF define types / classes (Example 26) that have properties and inheritance relations between classes, very much like the structures of classical programming languages, an approach that has been shown to be incompatible with the subjectivity and incompleteness properties of information.

  • Example 26
Class:              foaf:Person
Subclass Of:        foaf:Agent, geo:SpatialThing
Properties Include: plan, surname, geekcode, pastProject, lastName, family_name, publications, currentProject, familyName, firstName, workInfoHomepage, myersBriggs, schoolHomepage, img, workplaceHomepage, knows

When it comes to knowledge representation in human language, a class / type does not exist as a prerequisite for storing information about something. Types, classes, or sets do come into existence as part of an analysis process, when a collection of objects is analyzed based on the commonalities it exhibits, but otherwise they do not exist, because they would limit the capability to store information about the objects in discussion.

By the same token, human language does not have a limited set of relations that can be used to describe objects, such as propertyOf or subclassOf. In reality, there may be an infinity of relations between elements of reality; attempting to define a fixed set of relations that constructs the structure of reality will again limit the variations of knowledge that can be captured about the underlying reality.

The way these ontologies are created is the exact opposite of how knowledge is represented in human language. In human language, structuring follows analysis, and it affects only the result of the analysis; it has no effect on the source of information. In these ontologies, structuring (the creation of classes and relations) precedes the existence of the source of the information; in fact, the source of the information is a materialization of the structure itself, and consequently the structure limits the reality (the source of information) to whatever is captured in it. No additional a posteriori information can exist about the source of the information within the realm of such an ontology unless the ontology itself is updated.

By contrast, SPInDL defines no types, only two meta-types and two meta-relations, but the way these are used allows for an infinite number of relations, even future relations, without the introduction of new terminology or the need to modify the language itself.

A common reality

As a direct consequence of the constructive approach of these ontologies, combined with the subjective way industry participants handle information, web 3.0 is now the scene of an extremely complex architecture of types and relations originating in different ontologies, each trying to reconstruct human reality according to its own specific view. The concept of a namespace, which originates in the development of the XML standard (a structuring standard), has been ported into the ontology world, where it is used to denote ownership of the ontology; essentially, it attributes ownership of the way reality is constructed to a certain organization that took upon itself the responsibility to define it. The problem stems from the word constructed: there would be nothing wrong if the namespace defined how organization xyz perceives reality, but this is not the case. To illustrate the seriousness of the problem that this approach creates, let us consider an example from physics.

Physics is the discipline that studies reality; different laws explain how different forces of nature behave. Some models are more complete than others, and so they are able to observe different, more profound aspects of reality. Such would be the example of Newton's law of gravity (newton:gravity) and Einstein's law of gravity (einstein:gravity). The way ontologies are constructed now, gravity (a force of nature) would be a direct result of Newton's, respectively Einstein's, law of gravity, and not the other way around, resulting in two different universes with two different gravitational forces, one created by one of the models and one by the other. In this scenario there is not one gravity (the force of nature) observed by two different models, but two forces of nature, created by two different models. Some say that an upper ontology is needed, one that creates a new book of rules where we can specify relations between all the different models by saying something like newton:gravity sameAs einstein:gravity, but the concept itself is misguided. In fact, newton:gravity and einstein:gravity are not the same; they were never even intended to be the same. They may refer to the same concept from reality, but this is not what the sameAs relation states. The paradox arises from the fact that the semantic relation sameAs is being used in the wrong context: sameAs assumes that the models observe physical reality, but in the model in which sameAs is used, the models in fact define physical reality. The set of facts based on ontologies in web 3.0 is the web 3.0 reality from the perspective of the applications that operate with them, the same way the totality of documents and their connections on the Internet is the reality for a search engine, crawler, or web browser suite. It might be more difficult to notice the similarity, but foaf:Image and dcmitype:Image behave in exactly the same manner as newton:gravity and einstein:gravity, and an Internet of foaf:Images would be an alien reality to a browser built for dcmitype:Images.
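
To make the misuse concrete, here is a minimal sketch in Turtle notation; owl:sameAs is the real OWL property, while the newton: and einstein: namespaces are hypothetical:

@prefix owl:      <http://www.w3.org/2002/07/owl#> .
@prefix newton:   <http://example.org/newton#> .    # hypothetical namespace
@prefix einstein: <http://example.org/einstein#> .  # hypothetical namespace

# The upper-ontology "fix": assert that the two constructed concepts are
# one and the same resource, even though each ontology defines its own
# gravity rather than observing a shared one.
newton:gravity owl:sameAs einstein:gravity .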

This upper ontology approach is not only used in a misinterpreted manner, it is also impossible to track within a large enough pool of ontologies, because it has a backwards nature. Suppose we have newton:gravity, einstein:gravity, abc:gravity, bcd:gravity, etc:gravity as different interpretations of the natural law of gravity. Then organization ABC, defining abc:gravity, must constantly monitor the relations defined in this upper ontology to make sure there exists a sameAs relation with all the laws of gravity that ever were and ever will be. One cannot rely on the transitivity of the sameAs relation, because then one must also rely on the fact that any new organization XYZ contributing a new theory to the same law of physics will make sure to specify the sameAs rules accordingly. The question remains: up to how many variations of how many concepts can such a relation be tracked? Wouldn't it be simpler to standardize Gravity first, as the law of nature, supreme, existing outside of any definition, and then relate the custom models of the law of gravity to it: (newton:gravity describes Gravity), (einstein:gravity describes Gravity), etc.? Then all these models would implicitly have something in common, a common concept, something akin to a reality, as sketched below.
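
The proposed alternative could look like the following hedged sketch, again in Turtle; the common: namespace and the describes relation are hypothetical illustrations, not an existing standard:

@prefix common:   <http://example.org/common#> .    # hypothetical common-reality namespace
@prefix newton:   <http://example.org/newton#> .    # hypothetical namespace
@prefix einstein: <http://example.org/einstein#> .  # hypothetical namespace

# Gravity is standardized once, outside any particular model.
common:Gravity a common:NaturalLaw .

# Each model merely describes the common concept; no pairwise
# bookkeeping between models is required.
newton:gravity   common:describes common:Gravity .
einstein:gravity common:describes common:Gravity .

With n competing models, this takes only n describes statements, whereas the upper-ontology approach requires maintaining on the order of n(n-1)/2 pairwise sameAs relations across all participants.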

***
Knowledge represented in human language follows this common reality model. Everything that is within a language has a correspondent in the reality of humans: objects, time, space, perceptions, emotions, activities, et cetera. Additionally, the vast majority of reality is the same for all humans, so, interestingly enough, even though many languages formed far apart from one another, they are still largely compatible, with very few exceptional situations (see Common Meaning In Human Communication).

 

Illustration 4: Reality – Terminology substitution
Illustration 4, schematically depicting the world of human communication, emphasizes how every concept from the common reality first passes through a subjective representation in the brain of each individual but, in order for communication to be possible, ends up again as a unique objective term in another common realm: language. As opposed to computer reality, language has the benefit of an objective reality against which it is constantly synchronized, and as such there can be any number of languages; as long as they are synchronized against a common reality, translation between them is possible.

In the computer reality, which lacks the fundamental common reality of humans, on account of computers being unable to perceive it, we could substitute our reality with our language terminology. It would not be a complete substitution, as there are many concepts that are only contextually referred to in language, but it would nonetheless be a common reality: instead of a common reality, computers would use a common terminology. Whilst in the human world convergence runs from the word in the brain, to the concept in the brain, to the exact correspondent of that concept in actual reality (Illustration 4), in the computer world the convergence ends at the word.

The benefit of this approach is that there is no need to construct or impose a standard. Language already is a de facto standard present in every single culture, business, or any other group of people for that matter.

This would be a fundamental change with regard to how information is looked at, because the perspective changes from structure to concepts. The priority would no longer be how the data is structured or what properties a particular ontology captures, as in the case of foaf:Person, but rather what the data represents; in this case, Person as defined in the language dictionary. The association carries no structural information; therefore, there really is no need to define particular kinds of persons. As such, foaf:Person and abc:Person would not be defining the concept Person, which would already be defined in the language dictionary; rather, they would refer to the common concept Person and would only elaborate on the particularities of their view.
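
As a hedged sketch of this idea in Turtle (foaf:Person is the real FOAF class; the dict: namespace and the dict:refersTo relation are hypothetical stand-ins for the language-dictionary mechanism described above):

@prefix dict: <http://example.org/dictionary#> .  # hypothetical dictionary namespace
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix abc:  <http://example.org/abc#> .         # hypothetical organization namespace

# Person is defined once, in the common language dictionary.
dict:Person a dict:Concept .

# Each vocabulary refers to the common concept instead of redefining it;
# only the particular views differ.
foaf:Person dict:refersTo dict:Person .
abc:Person  dict:refersTo dict:Person .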

From a communication perspective this would have an enormous impact. There is no need for any elaborate repository system (UDDI or an upper ontology), because when communication occurs the term automatically carries the concept with it. Bob's and Alice's applications, which both handle data about the concept Person under their own particular structures, will be able to exchange information without ever being synchronized via an API, upper ontology, or similar repository, because they refer to a common reality in which Person has a very specific meaning.

The very fact that the word itself carries the reference to the concept in the common reality already provides some degree of semantics. Imagine that Alice is browsing through a set of data Bob has shared with her, and within it there is a list of Persons, which Bob's application is designed to manipulate, but Alice's computer is not. Because Alice's computer can rely on Bob's Person being what the dictionary says it is, and because it is fully aware of all the words in the dictionary (even if most of them have no implementation associated), it can safely receive the information from Bob's computer and present it to Alice. This would be totally impossible in the classical way, because a type in an API is not constrained to a concept of any kind; it is a self-contained entity. To give an example, abc:Person does not necessarily have to capture information about Persons; it could just as well be about Oranges, if the organization behind the type did not care to give its types suggestive names.
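
A sketch of the exchange under the same hypothetical dict: convention (bob: is an invented namespace standing in for Bob's application vocabulary):

@prefix dict: <http://example.org/dictionary#> .  # hypothetical dictionary namespace
@prefix bob:  <http://example.org/bob#> .         # hypothetical: Bob's own vocabulary

# Bob structures his data however he likes, but anchors his type
# to the common concept.
bob:Person dict:refersTo dict:Person .

bob:person42 a bob:Person ;
    bob:fullName "John Doe" .

# Alice's application has no implementation for bob:Person, yet by
# following dict:refersTo it knows the data is about a Person in the
# dictionary sense and can safely present it to Alice.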

Simplicity & Familiarity

The other trend that is enormously detrimental to web 3.0 is terminology invention, briefly touched on in chapter 1.5.5. Every single API and ontology that currently exists is dominated by this trend. Types, properties, and relations like the ones in Example 26 define their structure with compound expressions that, although they resemble dictionary terms, preserve only a partial connection with them: pastProject, lastName, family_name, publications, currentProject, familyName, firstName, workInfoHomepage, myersBriggs, schoolHomepage, img, workplaceHomepage, etc. The same can be observed in the BFO upper ontology (SnapEntity, TimeInstance, TemporalIndex, Ontology, TimeRegion, SpatialLocation, TemporalLocation), UMBEL (umbel:isAbout, umbel:correspondsTo), SKOS (skos:prefLabel, skos:altLabel, skos:hiddenLabel, skos:definition), CycL (#$relationAllExists #$biologicalMother #$ChordataPhylum #$FemaleAnimal), and all the others.

It is important to observe that these terms resemble words found in the language dictionary, and so they appear to refer to concepts from our reality. This resemblance, however, is highly misleading, because these expressions are in fact completely new terms within their own world, with definitions describing their role within the ontology that may or may not be similar to the definition of the dictionary term they resemble. They could not be any other way, because oftentimes these are root concepts within the ontology, having no reference whatsoever to other concepts within the ontology that could explain them.

  • Example 27, FOAF Person
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>John Doe</foaf:name>
    <foaf:workplaceHomepage rdf:resource="http://www.john-does-homepage.com/"/>
  </foaf:Person>
</rdf:RDF>

workplaceHomepage in FOAF is a completely new concept that has no explanation other than the human-readable one in the specification. There is no such thing in FOAF as Workplace or Homepage. Additionally, the use of words is not regulated either: nothing in any of these standards dictates that a word identical to one in the language dictionary must have exactly the same meaning as in the dictionary, which is in fact the real-world meaning of that word.
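
The contrast can be sketched as follows; foaf:workplaceHomepage and rdf:Property are real, while dict:, dict:refersTo, dict:Workplace, and dict:Homepage remain hypothetical illustrations of a dictionary-grounded alternative:

@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix dict: <http://example.org/dictionary#> .  # hypothetical dictionary namespace

# As it stands: a brand-new root term, explained only in the FOAF spec.
foaf:workplaceHomepage a rdf:Property .

# A dictionary-grounded alternative: the compound term would refer to
# the common concepts it is composed of.
foaf:workplaceHomepage dict:refersTo dict:Workplace , dict:Homepage .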

The new, invented word is a root concept within the ontology, and anybody who wants or needs to use it must first understand it by reading the specification. workplaceHomepage is one that is quite suggestive, even obvious, having a very good choice of words, but there are many in other ontologies that are much harder to understand: isAbout, correspondsTo, #$relationAllExists, etc., which have complex, ontology-level meanings (the way the intersection sign ⋂ has a special role in set theory). These are impossible to understand unless the specification is studied and understood.

Collectively, these ontology definition languages and ontologies generate tens of thousands of such new expressions. It is unreasonable to expect that any system could work with such an avalanche of terms, especially when these are not linked to an existing reality but rather generate one of their own. A system is needed that relies extensively on the real-world meaning of words, one that is based in the common reality of humans; only then will it be possible for it to become popular within large groups of people and create the dynamics that will ultimately generate web 3.0.

Openness

Human reality is enormous: it holds a vast number of objects and a possibly infinite number of relations between those objects. Any system that is to cope with such a reality cannot rely on predefined types and predefined relations, simply because it is impossible to account for all possibilities when those possibilities have no limit, or are possibly even unknown at the time of definition (like future concepts / relations).

If we take a look at human language, we can observe that it evolves in tandem with human needs and human reality in a perfectly smooth, seamless manner.