This is the most “technical” part of the discussion, spread across a few pages, as this post focuses on describing how the structure was designed.
The idea was to create a structure to describe information, store it online on infrastructure managed by a third party (a customer, an outsourcing company, a partner, etc.), and allow others to add, remove, modify, and view information, without necessarily giving the “host” access to the content and structure of that information.
This was done also because, besides spreading the same database across a few sites, the design was used for various applications, including a few that were supposed to evolve into packages.
These were actually used as tools to follow multiple projects at the same time: the same software solution supporting different management consulting needs, all resulting in documents, not software or service management.
The simplest example was a magazine containing issues, which contained sections, which contained articles, which contained paragraphs.
Then came relationships (“links”): between two paragraphs, between a paragraph and an article, or at any level up the hierarchy.
Any bit of information was therefore associated with an element within a “taxonomy”, i.e. a “category”.
And each category could have one or more “connections” to qualify its use.
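The hierarchy and cross-level links described above can be sketched in a few lines; this is a minimal illustration, and all names here are hypothetical, not the original design:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: every item is an element tagged with a category
# (its place in the "taxonomy"), and "links" can connect any two
# elements at any level (paragraph-to-paragraph, paragraph-to-article, etc.).

@dataclass
class Element:
    category: str              # e.g. "magazine", "issue", "section", "article", "paragraph"
    title: str
    children: list = field(default_factory=list)
    links: list = field(default_factory=list)  # cross-references to other Elements

# Build magazine -> issue -> section -> article -> paragraph
para = Element("paragraph", "Opening paragraph")
article = Element("article", "Lead article", children=[para])
section = Element("section", "Features", children=[article])
issue = Element("issue", "Issue #1", children=[section])
magazine = Element("magazine", "My Magazine", children=[issue])

# A cross-level link: a paragraph pointing to an article
para.links.append(article)
```

The point of the sketch is that links are independent of containment, so any element can qualify any other, regardless of their levels in the hierarchy.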
The same design (and database, but not the software components) was also used for small business communities, incorporating open-source components as part of the design.
Imagine, e.g., versions of a contract. In my case this derived from the experience of having to prepare countless proposals that were not simply a recycled “license agreement”, as would be the case for the sale of equipment, off-the-shelf software, or technical and professional services sold by the typical “15-minute slot”, but a kind of Bill of Materials for consulting, delivering… Word or Acrobat documents (and the occasional Excel file or database), not software.
Creating the conceptual data architecture
As the idea was to cover most of the applications that were part of my and my network’s routine, I had to find a general solution.
Examples of my (real life) uses? An online magazine, building proposals, producing the documents resulting from some consulting services such as feasibility or organizational studies, etc.
The first step was to start from a sample of real cases, to identify the list of categories (“datatypes”) needed: unbundling activities and results.
The second step required looking at how many connections between datatypes were needed in each case, and if other data storage needs were to be met (e.g. to associate specific activities to specific data categories).
The third step was to summarize that into a “generic” set of tables able to cover what I had found in the first and second step, along with other potential uses.
The fourth step was to study and create the “methods” to store offline the information within the structure, and then publish that structure online.
The fifth step was to study and create the methods to allow end-users access via the Web to retrieve information (i.e. software components that could read those “generic tables” and, based upon the application, decode and present the results: web page, Acrobat file, etc.).
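The “generic” set of tables the third step converges on might be summarized as below; the original DDL is not shown in the post, so the table and column names here are purely illustrative assumptions:

```python
# Hypothetical summary of a "generic" set of tables covering the cases
# found in the first and second steps; names are illustrative, not the
# original DDL.
GENERIC_TABLES = {
    # the taxonomy ("categories" / "datatypes")
    "datatype": ["datatype_id", "name", "description"],
    # how categories relate ("connections" qualifying their use)
    "connection": ["connection_id", "from_datatype", "to_datatype", "qualifier"],
    # the content itself, one row per stored bit of information
    "content": ["content_id", "datatype_id", "domain", "payload_xml",
                "created_at", "created_by", "modified_at", "modified_by"],
}

for table, columns in GENERIC_TABLES.items():
    print(f"{table}: {', '.join(columns)}")
```

The split matters because new applications then only add rows (new datatypes and connections), not new tables, which is what makes the structure reusable across applications.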
Deploying the architecture
It wasn’t really that complex: it was a matter of releasing either executables (on PCs) or scripting components (online). The latter were destructured in such a way as to make it at least difficult to modify and follow the logic (“obfuscation”). Of course, this implied developing through “environments”, as the encrypted online version was compressed in such a way as to be understandable by machines, but too cumbersome to be routinely modified by humans.
As for the data architecture, the DDL for the tables was focused on a “standard” relational database, so accesses were via standard SQL, and software components were used to carry out three main activities:
- CONVERT content descriptions into XML, stored in a large “content table” conceptually similar to a data lake: a prefix stating the datatype, a few columns to allow partitioning or separating by domain, a field containing the converted XML (“serialization”), and fields to timestamp creation and modification (with the associated users)
- ENCODE AND STORE all the information, encoding it before storage; on purpose, I selected a weak method, and in any case the host had access to the “key text” if needed; offline applications had encryption schemes only if the executable was delivered to a third party
- RETRIEVE, DECODE, PRESENT information when needed.
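The three activities above can be sketched end-to-end; since the post shows neither the DDL nor the encoding method, the table layout, column names, and the deliberately weak XOR-plus-base64 encoding below are all assumptions made for illustration:

```python
import base64
import sqlite3
import xml.etree.ElementTree as ET

KEY = b"key-text"  # stands in for the host-accessible "key text"; weak on purpose

def xor(data: bytes) -> bytes:
    """Trivially reversible masking, illustrating a deliberately weak method."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def to_xml(datatype: str, fields: dict) -> str:           # CONVERT
    root = ET.Element(datatype)
    for name, value in fields.items():
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def store(db, datatype, domain, xml_text):                # ENCODE AND STORE
    payload = base64.b64encode(xor(xml_text.encode())).decode()
    db.execute("INSERT INTO content (datatype, domain, payload) VALUES (?, ?, ?)",
               (datatype, domain, payload))

def retrieve(db, datatype):                               # RETRIEVE, DECODE
    row = db.execute("SELECT payload FROM content WHERE datatype = ?",
                     (datatype,)).fetchone()
    return xor(base64.b64decode(row[0])).decode()

# Hypothetical "content table": datatype prefix, domain column, XML payload,
# creation timestamp.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE content (id INTEGER PRIMARY KEY, datatype TEXT, "
           "domain TEXT, payload TEXT, created_at TEXT DEFAULT CURRENT_TIMESTAMP)")

xml_text = to_xml("paragraph", {"title": "Intro", "body": "Hello"})
store(db, "paragraph", "magazine", xml_text)
print(retrieve(db, "paragraph"))
```

Note how the database itself only ever sees an opaque payload column: the host can run backups and queries on the metadata columns, while the content stays decodable only by the software components (or anyone holding the key text, as the original design deliberately allowed).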
To summarize
The easiest way to discuss it is to consider its components, as outlined in the following picture:
As you can see, nothing conceptually complex: just a way to optimize activities, with a few benefits that I will let you evaluate.
You can contact me on Linkedin.com