
Winter 2023


A modular mind

The multi-layered, multi-nodal AI mind

Layer 1 


As Dehghani notes in Data Mesh (ISBN: 9781492092391), the approach involves several challenges, such as proper authentication and the ability to generate Python code in real time. Because the AI is modular across all of its layers, evolution in one or more layers can be adopted in stages: as newer AI models are built, models can evolve beyond transformer-based architectures, and any such model can serve as layer 1 thanks to the modular design.


With the proposed design, a team of researchers would be able to carefully move staged technology to production by assessing its performance across various domains, without compromising any of the critical processes that the technology enables through live models.

Layer 2

Layer two contains an API dictionary with the authorization and data stores across cloud and edge. The cloud is used for public meshes, whereas edge databases and devices are used for personal meshes. This idea is similar to several conceptions of ubiquitous AI dating back to Michio Kaku's book Visions and the sources cited in that futurist work.



In the current iteration, the API dictionary is a dynamically updating Google Sheet comprising all APIs across all form factors, including machines like MRIs and meta objects like non-fungible tokens, virtual machines, etc.
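One way to picture a single entry in such a dictionary is sketched below. The field names and values are illustrative assumptions, not the production schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class APIEntry:
    """One row of the API dictionary (illustrative fields, not the production schema)."""
    name: str         # human-readable API name
    endpoint: str     # base URL of the API
    auth_type: str    # e.g. "oauth2" or "api_key"
    location: str     # "cloud" for public meshes, "edge" for personal meshes
    form_factor: str  # "web", "device", "machine" (e.g. an MRI), "meta" (e.g. an NFT)

entry = APIEntry(
    name="Spotify Top Tracks",
    endpoint="https://api.spotify.com/v1/me/top/tracks",
    auth_type="oauth2",
    location="cloud",
    form_factor="web",
)
row = list(asdict(entry).values())  # flattened into one spreadsheet row
```

Keeping each entry as a flat row is what makes a spreadsheet a workable first home for the dictionary.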


Softtrust IT is leading the creation of the API dictionary through both manual and web-scraping projects. The current iteration used the following code to aggregate data and add it to a dynamically updating, callable Google Sheet in layer two of the model. The code, along with the dictionary, evolves in real time as part of CI/CD development. Newer versions of the API dictionary will be integrated on a model-by-model basis.
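The aggregation step can be sketched roughly as follows. The source data and record fields are placeholders rather than Softtrust IT's production scrapers, and the sheet-append step is left as comments because credentials are environment-specific:

```python
# Illustrative sketch of the aggregation loop: merge manually curated and
# scraped API records, de-duplicating by endpoint, before pushing rows to
# the shared Google Sheet. All data below is invented for illustration.

def merge_api_records(sources):
    """Merge API records from several sources, keeping the first record per endpoint."""
    merged = {}
    for source in sources:
        for record in source:
            merged.setdefault(record["endpoint"], record)
    return sorted(merged.values(), key=lambda r: r["name"])

manual_entries = [
    {"name": "MRI Scheduler", "endpoint": "https://hospital.example/api/mri"},
]
scraped_entries = [
    {"name": "MRI Scheduler", "endpoint": "https://hospital.example/api/mri"},  # duplicate
    {"name": "NFT Metadata", "endpoint": "https://chain.example/api/nft"},
]
rows = merge_api_records([manual_entries, scraped_entries])

# Appending to the live sheet would then look roughly like (using gspread):
# import gspread
# sheet = gspread.service_account().open("API Dictionary").sheet1
# for r in rows:
#     sheet.append_row([r["name"], r["endpoint"]])
```

De-duplicating by endpoint keeps the sheet stable when the manual and scraped pipelines overlap.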

Layer 3

The current model approaches the issue by using its ability to generate code for natural-language commands, e.g. "Show me my average spending across my bank accounts," "What are my top played songs across all my channels," and "Check for the lowest price available to purchase power across multiple sub-generators."


These form the current training dataset for the real-time API constructor. Further testing is being performed alongside independent contributors, Softtrust India, and other global collaborators.


The exhibit code is an MVP of the concept. It calls the APIs based on the tokens extracted from a given text. Companies like MuleSoft and Microsoft provide low-code and no-code solutions, but those products require developing APIs for each enterprise use case. This technology, however, solves that issue through its ability to:


  1. Use NLP-extracted tokens to call the relevant APIs from the layer 2 API dictionary

  2. Generate and execute Python code based on a given natural-language command

  3. Use layer 4 to learn from trained datasets and real-time unsupervised machine learning datasets.
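The flow in steps 1 and 2 can be sketched as a toy dispatcher. Simple keyword matching stands in for the real NLP extraction, and the two registered handlers are hypothetical placeholders, not production APIs:

```python
# Toy sketch of layer 3: extract tokens from a command, look up a matching
# API in a layer-2 dictionary, and dispatch. Keyword matching stands in for
# the real NLP pipeline; both handlers here are invented placeholders.

API_DICTIONARY = {
    "spending": lambda: "avg spend: $1,240/mo",  # would call bank-account APIs
    "songs":    lambda: "top track: ...",        # would call streaming APIs
}

def handle_command(command):
    tokens = command.lower().split()             # stand-in for NLP token extraction
    for token in tokens:
        if token in API_DICTIONARY:
            return API_DICTIONARY[token]()       # step 1: call the matched API
    return None  # step 2 fallback: generate Python code for the request instead

result = handle_command("Show me my average spending across my bank accounts")
```

When no dictionary entry matches, the fallback path is where real-time code generation would take over.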

Layer 4

The idea of a single global AI that can be everything to everyone is decades away, but this technology aims to play a critical role in providing several aspects critical to its capabilities: ubiquity (access from multiple sources and to multiple endpoints), centralized knowledge banks, heuristics, and self-learning capabilities that fine-tune results based on memory as well as real-time confidence assessments, much like a brain.


Another model, Textron, was developed and published in the paper "A qualitative approach to text sentiment analytics - an alternative to the bag of words model" at Indiana University, under the Shoemaker Innovation Center for computer science. Anoop Jain (Purdue, Microsoft) and I (Indiana University, Microsoft) partnered on Textron, a model based on the valence-arousal-dominance matrix. We experimented with introducing two poles, similar to the +1 and -1 scale used by the bag-of-words model popular at the time, which is still used in some transformer-based applications and much sentiment-analytics software today.

This technology is a precursor that informed the choices for training and testing the current self-learning model, specifically its ranking method, which is suitable for real-time learning. The model tested at 91% on each dimension and around 67% when all three were tested together, across a training dataset of 20,000 comments from Amazon product reviews. The model was trained on a lexical collection of words used in literary sources from the 16th century to 2007. The project is currently open-sourced under the MIT License.
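A toy version of the bipolar scoring idea is shown below. Each lexicon word maps to a valence-arousal-dominance triple on a -1 to +1 scale, mirroring the two poles described above; the lexicon values are invented for illustration and are far smaller than Textron's published lexicon:

```python
# Toy bipolar valence-arousal-dominance (VAD) scorer. Lexicon values here
# are invented for illustration, not taken from the Textron lexicon.

LEXICON = {
    "love":     ( 0.9,  0.6,  0.4),
    "terrible": (-0.8,  0.5, -0.3),
    "calm":     ( 0.3, -0.7,  0.2),
}

def vad_score(text):
    """Average each VAD dimension over the lexicon words found in the text."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not hits:
        return (0.0, 0.0, 0.0)  # neutral when no lexicon word matches
    n = len(hits)
    return tuple(round(sum(dim) / n, 3) for dim in zip(*hits))

score = vad_score("I love how calm this is")  # averages "love" and "calm"
```

Scoring each dimension independently is what allows per-dimension accuracy (91%) to exceed the accuracy of all three dimensions jointly (around 67%).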



The same model is proposed as the base for the heuristic (logical) learning section of the bicameral brain at the current topmost level. Though it borrows from the theory of mind, it does not replicate human thought. Future versions of the AI will incorporate overlays on top of layer 4 as our understanding of the human brain matures alongside AI.

Source: "Real time API builder for intelligent AI self-learning agent," abstract for USPTO utility patent US 18234352 (patent pending)

