The Future of the Digital Ecosystem – Part 3: How?

Each digital twin has a reality theater with a stage on which to perform in collaboration with other digital twins (persons, organizations, things, …). A theater is an instance of the OtterServer, which applies OWL DL for consistency in shared knowledge and a common dialogue language for interoperability across the internet.

The semantic web provides OWL DL as a common language for knowledge definition. Knowledge definitions in OWL DL are verified by reasoners for logical consistency. Reasoners are computer code designed to efficiently uncover any inconsistency. For example, it would be inconsistent to classify a shape as both a square and a circle, since a square and a circle are mutually exclusive.
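The square/circle example can be expressed as a toy check. Real reasoners (e.g., HermiT or Pellet) implement far richer logic; the names below are invented purely for illustration:

```python
# Toy sketch of the consistency check a DL reasoner performs.
# All names below are invented for illustration.

# TBox: Square and Circle are declared mutually exclusive (disjoint).
disjoint_pairs = {frozenset({"Square", "Circle"})}

# ABox: class assertions about named individuals.
assertions = {
    "shape1": {"Square", "Circle"},   # inconsistent: both classes asserted
    "shape2": {"Square"},             # consistent
}

def find_inconsistencies(assertions, disjoint_pairs):
    """Return individuals asserted to belong to two disjoint classes."""
    return [individual
            for individual, classes in assertions.items()
            for pair in disjoint_pairs
            if pair <= classes]

print(find_inconsistencies(assertions, disjoint_pairs))  # ['shape1']
```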

The diagram above shows the four theater operations of the OtterServer: lights, camera, action, and perform. The arrows in red indicate where the OWL class expression is applied by the OtterServer to define a graph.

The class expression is the formal language for querying and updating the OtterServer graph database. It is also the formal language for defining Service Component Architecture (SCA) dialogue and the parts played by the actors (digital twins).

Lights – A logical composition of a subset of knowledge. The composition is selected from the Digital Tree of Knowledge by a person. In DL (Description Logic), lights are the terminological knowledge, abbreviated in an ontology as the TBox.

Camera – Captures points in time of the named objects and stores them in a graph database. In DL, the stored data is referred to as assertions, named the ABox. The storage model is derived from the TBox, and queries and updates are defined by class expressions. In IT terms, the logical model is defined by the TBox, and the physical structure of the database is determined by the actual content of the ABox.

Action – The structured communications between actors; the scripts they follow. Scripts are the procedural statements that define the actions of components; a component may contain one or more scripts. These actions include providing services and requesting services. Component scripts include functions, conditionals, parallelism, and query/update of information maintained in the ABox.

Perform – The real-time performance in the ecosystem. Service requests and database requests are received and processed in a multi-thread environment.
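As a rough illustration of the Camera operation, here is a minimal sketch of an ABox stored as triples in a graph and queried for the individuals of a class. The names and structure are invented for illustration; this is not the OtterServer API:

```python
# Minimal sketch: ABox assertions stored as triples and queried for
# the individuals of a class. Names are invented for illustration.

abox = [
    ("store1", "rdf:type", "PizzaStore"),
    ("store1", "sellsPizza", "margherita"),
    ("store2", "rdf:type", "PizzaStore"),
]

def individuals_of(abox, class_name):
    """A trivial 'class expression' query: all individuals of a class."""
    return [s for (s, p, o) in abox if p == "rdf:type" and o == class_name]

print(individuals_of(abox, "PizzaStore"))  # ['store1', 'store2']
```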

The “How” is 100% OWL DL. It does not use a separate language for database query and update, nor does it use a separate language to define Service Component Architecture. This makes it easier for individuals and organizations to contribute to the Digital Tree of Knowledge. They need only learn one language for data definition, database access, and interoperability. One language whose content can be verified for consistency with existing library content.

Part 3 of this article introduces the concepts of the functionality implemented in the OtterServer prototype. This introduction is very high level and does not attempt to cover the foundation of technical standards on which it is based or how those standards are applied. That information can be found in the other articles at

The future offers great opportunity for every person to use their talents, time, and resources to serve others. It will come in the form of a person digital twin for all those who would like to have a personal advisor to help them find their true north.

The digital ecosystem will change. This article covered the “What”, “Why”, and “How”. So, the next question is “When?”. The answer is simple; like all other recent technology changes, this one will take place very rapidly.

The Future of the Digital Ecosystem – Part 2: Why?

Each person enters the world with the gifts of time, talents, and resources to use to serve others. The digital twin offers a person a positive personal experience by calibrating their true north, providing guidance, and monitoring their success. 

As shown in the diagram above, those that are seeking purpose and want a fuller life will find great personal benefit in having a digital twin to guide them on their journey. A personal “Wheel of Life” is a true north compass formed uniquely for each person. “A compass provides the ideal metaphor. Just as a compass points toward a magnetic field, your personal “true north” directs your path and pulls you forward.”

Although the concept of a balanced life is probably thousands of years old, Paul J. Meyer is credited with applying this concept in the very successful organization he founded, Success Motivation Institute. He developed a model for guiding others in personal growth, development, and success. “Meyer held the firm belief that all people, regardless of their gender, personality, social standing, or level of education, could develop the necessary characteristics to achieve and live a lifetime of success.” Paul J. Meyer (Follow this link to learn more about his amazing contributions.)

The digital twin will be equipped with the capabilities that Meyer envisioned and taught. Through the application of AI, a digital twin will match goals to reality and identify alternative paths. Computing the effort and risk of each step in a path helps an individual navigate and stay on true north. Although much simpler, the navigator software used to find the best route from one location to another is a good analogy.

A person can see their pursuit of their true north in the past and in the present. Guidance comes through the prediction of outcomes when selecting future options. Each option may take many paths that may be examined across future time spans. These alternate paths can be experienced in digital form. A person may simply be presented with a list of probable outcomes, or they may immerse themselves in a 3D world and experience multiple paths as if playing a game.

Discovery of talents, measuring time, and monetizing resources is aided by monitoring Maslow transactions. In addition, talents may be uncovered by gathering data on personal areas of interest. Time may be captured from a person’s calendar, work schedule, and events. Also, resources may be determined from the personal financial data available to the digital twin.

Some possible calibration calculations might be:

  • Talents may be improved through education and time performing tasks. Job promotions and education acquired might enhance talents.  (Leveling Up)
  • Professions could have a weight for average benefit to a person served.
  • Time consumed within some professions may carry an average number of people served.
  • Corporate employees could be allocated a share of the people served: the percentage of all people served multiplied by the percent defined for each department.
  • Money donated could be rated by the average percent of donations / income.
  • Number of people served by donation amounts would be provided by the receiver.

The previous list is provided only to give an idea of how true north calibration might be achieved by a digital twin. The actual calibration will be defined by:

  • the data captured within Maslow,
  • the projection of Maslow data onto the Wheel of Life data, and
  • the knowledge given to the twin by a person’s selection from the Digital Tree of Knowledge.
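To make the calibration bullets concrete, here is a hypothetical arithmetic sketch for two of them. Every figure and weight below is invented for illustration:

```python
# Hypothetical arithmetic for two of the calibration bullets above.
# Every figure and weight here is invented for illustration.

# "Money donated could be rated by the average percent of donations / income."
income, donations = 60000.0, 3000.0
donation_rate = donations / income               # 5% of income donated

# "Corporate employees could be allocated a share of the people served..."
company_people_served = 10000
department_share = 0.20    # assumed: department handles 20% of service
employee_share = 0.10      # assumed: employee is 10% of the department
people_served = company_people_served * department_share * employee_share

print(donation_rate, people_served)
```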

So why have a person digital twin? Simply because each person would receive great benefit from having a personal advisor to help them determine their true north and achieve the best use of their talents, their time, and their resources. Having a personal advisor is the primary motivator for having a digital twin. Keeping personal data private is a very positive outcome, but it is not the motivator.

With such overwhelming value in having a personal advisor, it would seem there would already be internet services available. And, of course, there are. They exist just like those shown on the left side of the diagram in Part 1. You share your most personal information, and they will score you and provide “Wheel of Life” diagrams. If the service is free, then they will share your information so others may market their products directly to you based upon your interests.

To provide a truly personal digital twin, it must be a single person’s voice in the digital ecosystem. It must be able to converse with other digital twins in support of the person’s personal fulfillment. There must be consistency in language and content where players in a conversation apply the same subject matter knowledge.

Part 1 of this article described “What” and this article, Part 2, describes “Why”. Part 3 will present “How”. The “How” defines the interoperability required between players by applying semantic web standards.

The Future of the Digital Ecosystem – Part 1: What?

In the future, a person will enter the digital ecosystem through their digital twin. Their digital twin will be a portal managing all conversations with other digital twins of persons, organizations, or things.

This is a future of the democratization of the Internet where each person takes control. It represents a shift from the present, where actions are neither personal nor private, to a future of personal and private interaction.

A digital twin will strut upon the digital stage with a full knowledge of all the other players. It will provide a person with data privacy and internet assisted personal fulfillment. As shown in the diagram above, there are key aspects to the performance of a person digital twin:

  • Person – A single real person.
  • Digital Twin – A real person’s unique representation on the web, structured around satisfaction within Maslow’s hierarchy of needs.
  • Digital Tree of Knowledge – Description logic and processes to support definitions and functions to satisfy a person’s needs.
  • Internet Servers – All servers accessed by the Digital Twin such as Education, Financial, etc.

In the theater of performance of a digital twin, the following capabilities will be provided:

  • The Digital Tree of Knowledge will be an open repository for access and contribution. Like a library, it will have a full classification system that defines the structure of the tree.
  • Only the real person can select the knowledge from the Digital Tree of Knowledge to share with their digital twin.
  • The Digital Twin performs all actions assigned by the person and stores all private information.

One current effort that closely aligns with a person digital twin is the Solid project of MIT:

“Solid is an exciting new project led by Prof. Tim Berners-Lee, inventor of the World Wide Web, taking place at MIT. The project aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy.”

This project recognizes the need for a radical change as described in this article. Yet security and privacy, although extremely important, are not enough to motivate people to adopt a radical new change. There must be a much stronger incentive.

Part 2 of this article will describe why people will transition to this change. The “Why” is found in the personal fulfillment gained by having a personal twin, where each person discovers and lives by their true north.

How to Educate Your Digital Twin with Blockly

Following the successful use of Google Blockly to build OWL class expressions, the next Otter Project effort is to move forward with building all description logic with Blockly. The outcome will be a full implementation of the edit functionality needed to educate your digital twin to perform in the digital ecosystem.

Knowledge vs Learning

The first step is to formalize the definition of knowledge building for AI. For the Otter Project, this definition has been extrapolated from the Knowledge Building theory attributed to Carl Bereiter and Marlene Scardamalia, two professors from the University of Toronto. Although they address knowledge building from the perspective of how human beings build their knowledge, their model is also applicable to AI.

According to their theory, knowledge building in education is either done in belief mode or design mode. Belief mode is what we are told to believe, based upon publicly shared knowledge. Design mode incorporates actual experience that fosters improvement. Consider that in AI, the same distinction exists. Building OWL DL documents is based upon shared beliefs, while training neural networks is based upon design by incorporating actual experience.

OWL DL documents represent the capturing of knowledge by humans to share with computers. Neural networks are essentially trained by repetition. The methods applied for training continue to improve so a computer can learn from big data observations.

Cognitive Artifacts

Bereiter and Scardamalia describe knowledge building as a way to create new cognitive artifacts. They recognize the importance of the community in building public knowledge artifacts, and of individuals applying innovation to build new artifacts. In the OTTER project, this concept is formalized by layered groups of knowledge topics for OWL DL documents as listed below:

  • Innovation – Unique and private topics building upon Business, Academia, and Universal topics.
  • Business – Topics defined by the North American Industry Classification System (NAICS) and building upon Academia and Universal topics.
  • Academia – Topics defined by the Classification of Instructional Programs (CIP) and building upon Universal topics.
  • Universal – Common topics for classification, processing, and federation as provided by the Otter Server.

Each OWL DL document is classified under one and only one topic.

Knowledge Building Graph

The following is an initial implementation showing a knowledge building graph. The Pizza Stores DL topic from the Otter Server prototype is selected as the focus for the topic filter. The visible layer options of Innovation, Business, and Academia are set on, while Universal is set off.

Setting Business and Academia off and setting Universal on shows the Universal topics applied in the Pizza Stores OWL DL document.

The Knowledge Building graph is dynamically created based upon the selection criteria and the dependency of one OWL DL document on another. Dependency is based upon the imports defined in each OWL DL document.
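The import-driven dependency just described can be sketched as deriving graph edges from each document's imports list. The document names below are invented for illustration:

```python
# Sketch of deriving the knowledge building graph's edges from the
# imports declared in each OWL DL document (document names invented).

imports = {
    "PizzaStores": ["NAICS_722", "CIP_12"],   # Business builds on lower layers
    "NAICS_722": ["Universal_Core"],
    "CIP_12": ["Universal_Core"],
    "Universal_Core": [],
}

def dependency_edges(imports):
    """One directed edge per owl:imports declaration."""
    return [(doc, dep) for doc, deps in imports.items() for dep in deps]

print(dependency_edges(imports))  # four edges feed the dynamic graph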

Knowledge Building Artifacts

In the OTTER project, these artifacts exist within Lights, Camera, Action, and Perform. Each of these will implement Blockly edits to visualize, create, and update content artifacts. The list of items in each is an indication of what is to come.

  • Data Properties
  • Object Properties
  • Query Database
  • Request a Response

This is the beginning implementation of building AI knowledge by the Otter Project using Google Blockly. It is the next step towards achieving the education of your digital twin.

Blockly Experiment

Logic brings lights, objects bring the camera, and services bring the action. Together the performance is staged by the Otter Server. Blockly may be an ideal editor for Otter Server content.

The Blockly visual development process is currently being evaluated as a possible method for providing edit capability to the components of the Otter Project. A sample test is being run using a prototype development of Blockly blocks for constructing logic, services, and objects. Readers can view the test and try it for themselves at:

Motivation to Use Blockly

Blockly is an editor for visually constructing logical statements by connecting 2D blocks. Whereas most ontology editors are either forms-based or text-based, Blockly is picture-based. All blocks have a common shape, and each supplies content and connectivity. However, each individual block has its own unique content, connection restrictions, and differentiating color.

Blockly provides localization so the same blocks can be presented in multiple languages. The approach used in Blockly for localization may also be extended to ontologies.

Starting with Objects

The first test is with objects, as defined by the standard for Service Data Objects. In the Otter Project, objects are defined by OWL class expressions. The test shows how Blockly blocks can be combined to build a class expression, and that the Blockly visual can then be transformed into a text-based class expression. The Blockly blocks may also be created directly from a text-based class expression.

Objects were chosen as the first area to experiment with Blockly due to the importance of class expressions to the Otter Server. Class expressions are the language used to access and update the ABox database. They are also used to define all the messages in service dialogues.

Next Steps

  1. Finalize the blocks for objects.
  2. Complete localization for the blocks and the OWL DL ontologies.
  3. Develop the blocks for services as defined by the standard Service Component Architecture and the Business Process Execution Language.
  4. Develop blocks for logic as defined by description logic and specified by the OWL language.


Sharing and reuse of multilingual description logic, processes, and data with consistency when combined.

Mapping OWL to an OWL Ontology

The OWL language is well-structured and meticulously designed for description logic. There are many sources of documentation describing its specifications with examples. There are also thousands of projects that utilize the OWL language. It is well documented for humans.

In the Otter Project, all ontologies are designed to be understood by both humans and computers. Humans can understand almost anything, but computers are limited to their functionality. Since the Otter Server has a finite set of functionality, the OWLMap exists to reflect that functionality.

OTTER utilizes this ontology to provide access to all OWL-defined ontologies. Developed as part of the Otter Project, it has been through multiple revisions as the project has progressed.

All graph structures in the Otter Server are defined as SDO Datagraphs, and Datagraphs are created from OWL class expressions. The OWLMap ontology provides the Otter Server with the ability to use a class expression to access the content of all the loaded ontologies. The class expression is the base for queries, updates, and server messages.

The OWLMap ontology is not intended to be a complete representation of the OWL language. The structure of OWLMap follows the principle of “form follows function”. With this principle in mind, the purpose of OWLMap is to provide a form that gives full access to the functionality of the Otter Server.

There are twenty-six classes described in the OWLMap as of 3/11/18. The following diagram generated using d3.js shows the class content and the class relations:

When this diagram is accessed within the Otter Server and focus is given to a single class, only that class’s relations are shown. Giving focus to a relation will show it as either a subset or as having one or more object properties (domain and range) or class relationships.

Selecting a class will bring up a diagram that shows the properties and relations of the class. The following is an example of the OWL_Entity class:

The object properties include: inDocument (the containing document), hasAnnotation (a list of annotations), and isEntityOf (the class of the entity as a subclass of OWL_Document). The data properties include: hasName (the string name of the entity).

The OWL_Entity is the superclass of: OWL_Document, OWL_Class, OWL_Datatype, OWL_Annotation, OWL_Individual, and OWL_Property. Each of these classes inherits the object properties and data properties of OWL_Entity.

Another more recently added class is ExpType, which defines the content of a class expression as shown in the following diagram:

This class is the superclass of the five assertion classes of a class expression: IndividualRestriction (a specific individual name), OWL_Class (an OWL class name), BooleanConstructor (defines “and”, “or”, and “not”), DataRestriction (a data property restriction), and ObjectRestriction (an object property restriction).

The ExpType class was added to support the construction of a visual function to build a class expression. Class expressions can be very complex. The visual function builds a class expression in the tradition of “What you see is what you get.”
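As a rough sketch of how the five ExpType assertion kinds compose, here is a toy nested class expression rendered to text. The nested-tuple encoding is an assumption for illustration, not the OtterServer's internal format:

```python
# Toy sketch of a class expression composed from the ExpType assertion
# kinds. The tuple encoding is an assumption, not the OtterServer format.

expr = ("and",                                   # BooleanConstructor
        ("class", "Pizza"),                      # OWL_Class
        ("not",                                  # BooleanConstructor
         ("object", "hasTopping",                # ObjectRestriction
          ("class", "MeatTopping"))))

def render(e):
    """Turn the nested tuples into a readable text expression."""
    kind = e[0]
    if kind == "class":
        return e[1]
    if kind == "not":
        return "not (" + render(e[1]) + ")"
    if kind == "and":
        return "(" + " and ".join(render(x) for x in e[1:]) + ")"
    if kind == "object":
        return e[1] + " some " + render(e[2])
    raise ValueError(kind)

print(render(expr))  # (Pizza and not (hasTopping some MeatTopping))
```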

These diagrams are visual results of the use of the OWLMap ontology. The OWLMap is also being used to construct the class expression builder for both service messages and repository queries. It will also be used to construct the ontology builder.


The Tree of Knowledge

The best-known method for sharing knowledge is a library system: a system that categorizes all knowledge components within a defined topology. In the OTTER Project, there is a topology with four layers: innovation, business, academia, and federation.

Innovation components are uniquely created to represent a digital twin of an entity such as an organization, a person, or a project. They are a combination of selected knowledge components from the other layers within the knowledge library. They are considered private and are users of the reusable components of the knowledge library.

For business, the NAICS codes (North American Industry Classification System) are applied. These codes are well defined and can be found at:

Academia applies the codes of the CIP (Classification of Instructional Programs). These codes and their full descriptions can be found at:

The OtterServer provides the infrastructure for the Federation layer, which supports the sharing of knowledge components consisting of:

  • Persistent data and its metadata,
  • Services and metadata, and
  • Applications for implementing services.

Using this topology, the Tree of Knowledge is formed for finding and cataloging knowledge. The tree has leaves, as defined by the NAICS standard for categorizing business types. The limbs are combined academic disciplines, as defined in the CIP 2010. The federation roots provide consistency for a firm foundation.

The crosswalk that links NAICS codes to CIP 2010 requires the combination of multiple published crosswalks. This is a process that needs refinement relative to knowledge sharing, and will be a subject of a later post.

Tree of Knowledge

If your browser allows, you can click the image above to show a larger view so the text is more readable. Also, if you’re using a mouse, hovering over a leaf or a node where two limbs come together will show the list of contributing CIP 2010 codes.

The tree is dynamically created. The process begins with the business nodes. Nodes are combined two at a time and replaced by a single node, continuing until only one node remains: the trunk of the tree. The combination process first finds the node with the fewest assigned academic disciplines and combines it with the node whose CIP assignments are most similar.
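The pairwise combination can be sketched as follows. Jaccard similarity is an assumed measure of "most similar CIP assignments", and the CIP codes are invented:

```python
# Sketch of the pairwise combination: repeatedly take the node with the
# fewest CIP assignments and merge it with the most similar node.
# Jaccard similarity is an assumption; the CIP codes are invented.

nodes = {
    "Utilities": {"15.05", "03.02"},
    "Construction": {"15.05", "46.00"},
    "Information": {"11.01", "11.04"},
}

def similarity(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

while len(nodes) > 1:
    smallest = min(nodes, key=lambda n: len(nodes[n]))
    partner = max((n for n in nodes if n != smallest),
                  key=lambda n: similarity(nodes[smallest], nodes[n]))
    nodes[smallest + "+" + partner] = nodes.pop(smallest) | nodes.pop(partner)

print(list(nodes))  # a single trunk node remains
```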

A knowledge component would be classified with a single code from business, academic discipline, or OTTER federation. For business, the knowledge component would be found in a leaf. For an academic discipline, a knowledge component could be found in multiple limbs of the tree as well as in the trunk of the tree.

The Tree of Knowledge only references the highest group of NAICS and CIP codes. The highest level of NAICS has the following codes:

11 Agriculture, Forestry, Fishing and Hunting
21 Mining, Quarrying, and Oil and Gas Extraction
22 Utilities
23 Construction
31-33 Manufacturing
42 Wholesale Trade
44-45 Retail Trade
48-49 Transportation and Warehousing
51 Information
52 Finance and Insurance
53 Real Estate and Rental and Leasing
54 Professional, Scientific, and Technical Services
55 Management of Companies and Enterprises
56 Administrative and Support and Waste Management and Remediation Services
61 Educational Services
62 Health Care and Social Assistance
71 Arts, Entertainment, and Recreation
72 Accommodation and Food Services
81 Other Services (except Public Administration)
92 Public Administration

Each of these codes is subdivided into more detail. For example: “22 Utilities” is sub-divided into the following categories:

    • 2211 Electric Power Generation, Transmission and Distribution
      • 22111 Electric Power Generation
        • 221111 Hydroelectric Power Generation
        • 221112 Fossil Fuel Electric Power Generation
        • 221113 Nuclear Electric Power Generation
        • 221114 Solar Electric Power Generation
        • 221115 Wind Electric Power Generation
        • 221116 Geothermal Electric Power Generation
        • 221117 Biomass Electric Power Generation
        • 221118 Other Electric Power Generation
      • 22112 Electric Power Transmission, Control, and Distribution
        • 221121 Electric Bulk Power Transmission and Control
        • 221122 Electric Power Distribution
    • 2212 Natural Gas Distribution
      • 22121 Natural Gas Distribution
        • 221210 Natural Gas Distribution
    • 2213 Water, Sewage and Other Systems
      • 22131 Water Supply and Irrigation Systems
      • 22132 Sewage Treatment Facilities
      • 22133 Steam and Air-Conditioning Supply

Business knowledge components classified at higher levels in a category should apply to multiple of its sub-categories. For instance, the standard IEC 61850 is used by devices in electrical substation automation systems. This knowledge component should be catalogued within 22 Utilities. It might be proper to assign it the code 2211 for electric power, or, if it only pertains to distribution, the code 22112 may be a more accurate assignment.

The CIP also has a topology of subdivided classifications. Using this topology, knowledge components should be classified at the appropriate level in the same manner as the NAICS codes. Higher-level codes are for knowledge components that apply to multiple sub-categories.

The OtterServer enforces the tree structure by requiring that documents within a knowledge component can only have a single classification code, and can only import documents from the same layer or from a lower level layer. In other words, a business document can import another business document, an academia document, or an infrastructure document. An academia document can only import another academia document or an infrastructure document.
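The import rule can be sketched as a simple rank comparison. The layer ranks and the function name below are illustrative, not the OtterServer's implementation:

```python
# Sketch of the layer rule: a document may import only from its own
# layer or a lower one. Layer ranks and names are illustrative.

RANK = {"innovation": 3, "business": 2, "academia": 1, "infrastructure": 0}

def import_allowed(importer_layer, imported_layer):
    """True when the imported document sits at the same or a lower layer."""
    return RANK[imported_layer] <= RANK[importer_layer]

print(import_allowed("business", "academia"))   # True
print(import_allowed("academia", "business"))   # False
```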

The academia documents provide the principles of knowledge. Most of these will not change significantly over time, although there may be new or extended versions. Businesses will change and should be dependent on the proven principles of knowledge found in academia. Any business document not dependent upon academic disciplines can be viewed as having no principles.

The Tree of Knowledge, like a library, provides the means for locating shared knowledge, and the OtterServer provides the means to utilize that knowledge. Due to the consistency of description logic languages and the OtterServer framework, all knowledge components are homogeneous and interoperable.

REA Business Model

The OtterServer includes a simulation of three pizza stores, each making pizzas from supplies ordered from three different sources. The REA (Resource Event Agent) business model, proposed in 1982 by William E. McCarthy, is applied in the form of two xontos (Executable Ontologies Defined in a Descriptive Logic Language): REA Exchange and REA Convert. These xontos provide the definitions and services required to run the simulation.

Model-Driven Design Using Business Patterns, an excellent book by Pavel Hruby, was the source for constructing the models used in the simulation. This book provides patterns that can be applied to many business processes.

The “Exchange” xonto provides the means to describe the contracts for selling pizzas and acquiring the ingredients to make the pizzas. This xonto has the following structure:

The simulation has two forms of the exchange model. One form is to define the exchange between a customer and a store:

Three contracts are defined, one for each store. When a pizza is purchased by a customer, the store pizza resource is decremented and the price of the pizza is paid for by the customer. The income from each purchase goes into the appropriate revenue accounts. This is triggered by the service “Order Pizza from Menu”.

The other application of the exchange model happens between the stores and their suppliers. Since there are three stores and three suppliers, the simulation includes a total of nine store and supplier contracts as shown below:

The stores restock the ingredients they need by ordering from their suppliers. Every order includes the number of units of each ingredient required by the store. Each store’s pizza ingredients are incremented and the appropriate expense account is decremented. This event is triggered by the service “Restock Ingredients”.

The REA “Convert” xonto is used to make pizzas. The general structure of this xonto is:

When pizzas are ordered by customers, this model is applied to make each pizza:

The service “Order Pizza” performs the convert for a specific pizza according to that pizza’s recipe. For each pizza produced, the pizza resource is increased and the ingredients used to make the pizza are decreased.

The simulation operates over several days. At the beginning of each day, each store’s ingredients are checked by the “Restock Ingredients” service to see if they fall below the economic order quantity. If so, an order is prepared for the suppliers to restock to a level that does not exceed the store’s storage capacity. The simulation then randomly creates specific pizza orders for the stores.
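The daily restock check can be sketched as follows. All quantities below are invented for illustration:

```python
# Sketch of the daily restock check: order any ingredient that has
# fallen below its economic order quantity, up to storage capacity.
# All quantities are invented for illustration.

stock = {"dough": 8, "cheese": 25, "sauce": 40}
economic_order_qty = {"dough": 10, "cheese": 20, "sauce": 30}
storage_capacity = {"dough": 50, "cheese": 60, "sauce": 80}

def restock_order(stock, eoq, capacity):
    """Order up to capacity for each ingredient below its EOQ."""
    return {item: capacity[item] - qty
            for item, qty in stock.items() if qty < eoq[item]}

print(restock_order(stock, economic_order_qty, storage_capacity))
# {'dough': 42}
```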

The viewer for the simulation slows down the process using time delays, allowing the details of the revenue and expenses incurred each day to be seen.

The REA models work perfectly as xontos, and sincere thanks are given to Pavel Hruby for describing these patterns. See

Form Follows Function

Perhaps the most famous quote from Louis Sullivan, the master architect and mentor of Frank Lloyd Wright, was:

“Form Follows Function.”

The value of that philosophy is obvious when it comes to building homes and business structures, and it should be just as obvious when building information systems. In the OTTER Project, the intended function is to establish:

Ontology-driven Enterprise Architecture.

The OtterServer is the form that follows. The overall function of ontology-driven Enterprise Architecture can be subdivided into smaller, more specific functions. Those functions and their form in the OtterServer Structure are outlined in the following:

Function – Form that follows

  • Blackbox components – Ontologies
      o   Ontology definition – OWL
      o   Consistency – General reasoners
  • Layered knowledge – OWL imports
      o   Layer definitions – Standard education & business categories
      o   Layer dependency – Standard crosswalks of education & business categories
      o   Infrastructure layer – Ontologies for standard architecture
  • Business processing – Service Component Architecture (SCA)
      o   Standard connections – SCA ontology sockets & plugs
      o   Service access – Class expression messages
      o   Access security – Socket access through IAM
      o   Business process execution – BPEL ontology
  • Information persistence & transformation – Service Data Objects (SDO)
      o   Information access – OWL class expressions
      o   Reference data persistence – Ontology individuals
      o   Master data persistence – External storage
      o   Transformation – SDO update
  • Visualization – Graphical presentation
      o   Ontologies – D3.js diagrams
      o   SCA – Graphviz graphs
  • Development / Change management – Protégé editor

Some might read this list and think it’s incomplete, since it doesn’t include meeting a business need. That function would entail gathering requirements from stakeholders and the building or buying of a system appropriate to their needs. And viewed from the traditional approach to software engineering, this list is incomplete. However, this list is not about what is to be constructed, but rather, how it is to be constructed.

The ontology-driven approach is very different from the traditional when using a framework like the OtterServer. New construction doesn’t begin with a clean slate. It begins by understanding and utilizing the knowledge that has been captured by other professionals. When a common framework is used, all of the ontologies will work together.

The OtterServer supports the capture and reuse of knowledge in academic disciplines, business, and infrastructure. With that stored knowledge, anything new can be built upon proven ontologies. And having a common framework will result in a major leap in our ability to use software to handle greater and more complex systems. Once we have quality computing systems that rely upon our accumulated and proven knowledge, they will far exceed the capabilities of the localized and limited applications of today.