
Design Data - Information flow


📚 data logic types Information Frames data tech flows 📚

👐 C-Steer C-Serve C-Shape 👁 I-C6isr I-Jabes I-Know👐
👐 r-steer r-serve r-shape 👁 r-c6isr r-Jabes r-know👐

🔰 Contents Frame-ref ZarfTopo ZarfRegu SmartSystem ReLearn 🔰
  
🚧  Knowium P&S-ISFlw P&S-ISMtr P&S-Pltfrm Fractals Learn-I 🚧
  
🎯 Know_npk Gestium Stravity Human-cap Evo-InfoAge Learn-@2 🎯


RN-1 The classic technological perspective for ICT


RN-1.1 Contents

RN-1.1.1 Looking forward - paths by seeing directions
A reference frame in mediation innovation
When the image link fails, 🔰 click here for the most logical higher fractal in a shifting frame.
Contexts:
r-serve technology enablement for purposes
r-steer motivation purposes by business
r-shape mediation communication
data infotypes
data techflows
There is a counterpart 💠 click here for the impracticable diagonal shift to shaping change.


The Fractal focus for knowledge management
The impracticable diagonal connects the technology realisation to a demand from administrative support. There is no such connection; the shape mindset is mediation innovation:
Understanding sentences about "the problem" and "the purpose" requires understanding the grammar that defines sentences. So we need to understand, more technically, what the language in systems is.
Seven distinctions define the invariant operators of sense-making and organization. Purpose (POSIWID) and "the problem" are not additional distinctions, but emergent constructs produced when these operators are enacted through recurring 3*3 sense-act-reflect patterns across scales. Combining:
  1. For the grammar we end up with 6-7 distinctions, although we are not aware of them.
  2. In the grammar there are several perspectives on distinction types for different purposes.
  3. Purpose (POSIWID) and "the problem" do not exist independently; they are constructed through the interaction of the 7 distinctions.
  4. The 3*3 forms the sentence, expressed as:
    • Horizontal: Sense, Act, Reflect
    • Vertical : Context, Process, Outcome
Information processing applies this grammar for using sentences, the third wave:
  1. The operators are scale-free
  2. The 3*3 is a projection
  3. The loop creates meaning
  4. Meaning retroactively defines purpose and problem
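The 3*3 projection and its completeness can be sketched as a tiny data structure. This is an illustrative sketch only; the axis names come from the text above, and the completeness rule (every cell of the grid must be touched before the loop "creates meaning") is an assumption for demonstration.

```python
from itertools import product

# Illustrative axes of the 3*3 projection (names taken from the text above).
HORIZONTAL = ("Sense", "Act", "Reflect")      # the sentence's "verbs"
VERTICAL = ("Context", "Process", "Outcome")  # the sentence's "subjects"

# The full 3*3 grid: every (vertical, horizontal) pairing is one cell.
GRID = list(product(VERTICAL, HORIZONTAL))

def is_complete(visited):
    """Assumed rule: the loop creates meaning only when every cell is touched."""
    return set(GRID) <= set(visited)

# A loop that skipped all Reflect cells is not complete.
partial = [(v, h) for v, h in GRID if h != "Reflect"]
print(is_complete(partial))  # False
print(is_complete(GRID))     # True
```

The point of the sketch is that "purpose" and "the problem" are not extra cells: they only show up as properties of a completed pass over the grid.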
The common challenge with all complexities is that this is full of dualities - dichotomies.
The serve mindset technology realisation:

RN-1.1.2 Local content
Reference Squad Abbreviation
RN-1 The classic technological perspective for ICT
RN-1.1 Contents contents Contents
RN-1.1.1 Looking forward - paths by seeing directions
RN-1.1.2 Local content
RN-1.1.3 Guide reading this page
RN-1.1.4 Progress
RN-1.2 Knowledge shoulders for the 6x6 RFW bsiarflw_02 Frame-ref
RN-1.2.1 ....................................... right questions
RN-1.2.2 .......................................nd replies
RN-1.2.3 ......................................., fame, honor
RN-1.2.4 .......................................d dichotomies
RN-1.3 Augmented axioms: Anatomy Physiology ZARF bsiarflw_03 ZarfTopo
RN-1.3.1 ............................................ame
RN-1.3.2 ............................................imensions
RN-1.3.3 ............................................ons
RN-1.3.4 ............................................ations
RN-1.4 Augmented axioms: Neurology Sociology ZARF bsiarflw_04 ZarfRegu
RN-1.4.1 ..................................s
RN-1.4.2 ..................................ology: 1* dimensions
RN-1.4.3 .................................. & implications
RN-1.4.4 .................................. & implications
RN-1.5 Insight for intelligence in viable systems bsiarflw_05 SmartSystem
RN-1.5.1 ........................................xt
RN-1.5.2 ........................................ good regulator
RN-1.5.3 ........................................-abstraction
RN-1.5.4 ........................................ns to clear
RN-1.6 Learning systems maturity from 6x6 RFW's bsiarflw_06 ReLearn
RN-1.6.1 ............................................. model
RN-1.6.2 .............................................res
RN-1.6.3 .............................................ment
RN-1.6.4 ............................................. crisis
RN-2 The impact of uncertainty to information processing
RN-2.1 Reframing the thinking for decision making bsiarsys_01 Knowium
RN-2.1.1 Distinctions containing tensions in grammar
RN-2.1.2 Using DTF as one of the perspectives aside Zarf Jabes etc.
RN-2.1.3 Reframing the SIAR model using dialectical abstractions
RN-2.1.4 Diagnosing dialectically the broken system in decision making
RN-2.2 A new path in thinking - reflections bsiarsys_02 P&S-ISFlw
RN-2.2.1 Understanding of options in the many confusing AI types
RN-2.2.2 Asking not only results (appeasing) but also the reasoning
RN-2.2.3 Asking for the reasoning in adjusted 3*3 frames
RN-2.2.4 The challenge: "From Tension to Direction"
RN-2.3 Purposeful usage of dialectal thoughts bsiarsys_03 P&S-ISMtr
RN-2.3.1 Underpinning nominal limit in distinctions at a dimension
RN-2.3.2 Thinking dialectical on how to define "the problem"
RN-2.3.3 The role of certainty in systems, TOC: first order
RN-2.3.4 The role of certainty in systems, SD: second order
RN-2.4 Becoming of identities transformational relations bsiarsys_04 P&S-Pltfrm
RN-2.4.1 Communities of practice - collective intelligence
RN-2.4.2 The challenge in building up relationships
RN-2.4.3 A practical case for understanding DTF impact
RN-2.4.4 ....................................................ons
RN-2.5 Closing the loop using dialectical thinking bsiarsys_05 Fractals
RN-2.5.1 DTF Alignment to the 6x6 reference frame & Jabes
RN-2.5.2 Common pathologies in DTF completeness
RN-2.5.3 Common struggles achieving DTF completeness
RN-2.5.4 The T-forms challenge activating change
RN-2.6 Evaluating system dialectical thinking bsiarsys_06 Learn-I
RN-2.6.1 What legitimately can be done with DTF using texts
RN-2.6.2 Using a mindset with graphs in understanding thought forms
RN-2.6.3 Governance boundaries in complex & chaotic systems
RN-2.6.4 System execution boundaries and moving boundaries
RN-3 The three different time consolidation perspectives
RN-3.1 Using the understanding continuum practical siaragil_01 Know_npk
RN-3.1.1 ....................................................rns
RN-3.1.2 .................................................... shifts
RN-3.1.3 ....................................................s
RN-3.1.4 ....................................................s
RN-3.2 Using the emergence pragnanz gestalt siaragil_02 Gestium
RN-3.2.1 ....................................................tterns
RN-3.2.2 ....................................................actions
RN-3.2.3 ....................................................tions
RN-3.2.4 ....................................................ents
RN-3.3 Using the "center of gravity" in value streams siaragil_03 Stravity
RN-3.3.1 .................................................patterns
RN-3.3.2 .................................................ons
RN-3.3.3 .................................................ions
RN-3.3.4 .................................................nts
RN-3.4 Human Capital in systems for capabilities siaragil_04 Human-cap
RN-3.4.1 ................................................
RN-3.4.2 ................................................implify
RN-3.4.3 ................................................ractals
RN-3.4.4 ................................................efs
RN-3.5 Changing systems information age C&C siaragil_05 Evo-InfoAge
RN-3.5.1 ..........................................pes
RN-3.5.2 ..........................................mergent types
RN-3.5.3 ..........................................vations
RN-3.5.4 ..........................................in systems
RN-3.6 Touching transcendental boundaries in learning siaragil_06 Learn-@2
RN-3.6.1 ............................................... a whole?
RN-3.6.2 ...............................................tomy
RN-3.6.3 Becoming the opposite of what was intended
RN-3.6.4 ...............................................ystems

RN-1.1.3 Guide reading this page
The quest for methodologies and practices
This page is about a mindset framework for understanding and managing complex systems. The types of complex systems focused on are the ones where humans are part of the systems and build the systems they are part of.
The phase shift from classic linear and binary thinking into non-linear dialectical thinking is brought to completion by aligning with the counterpart of this page. A key concept is "dialectical closure", words that are not understandable without a simple explanation.
👁 💡 Dialectical closure per steering view: what it means when closure is reached, and what it does not mean (skipped to binary):
Look ahead ➡ where am I going? | only looking ahead ➡ fantasy
Look around ➡ what is happening now? | only looking around ➡ drifting
Look back ➡ did my last move work? | only looking back ➡ paralysis

📚 It means the picture is whole enough to act responsibly.
Using the 3*3 matrix, the cycle is the flow.
In 3*3 terms, when any of these is missing:
The problem is not seen (Context * Sense): no real learning occurs
Execution does not happen (Process * Act): decisions feel arbitrary
Purpose is not reflected (Outcome * Reflect): people get confused or resist

🎭 Dialectical closure is when all three views are taken together before deciding the next move.
Without closure: frameworks feel abstract, discussions go in circles, people talk past each other
With closure: disagreements become productive, roles become clear, action becomes legitimate.
Dialectical closure is reached when context, action, and consequences are considered together, allowing meaningful action without ignoring tensions.
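The closure rule above can be sketched as a small decision helper. The view names and the returned messages are assumptions made for this sketch; the one-view pathologies (fantasy, drifting, paralysis) come from the table earlier on this page.

```python
# Hypothetical sketch: the three steering views named in the text,
# with the pathology that appears when only one view is used.
VIEWS = {
    "look_ahead":  "fantasy",    # only looking ahead
    "look_around": "drifting",   # only looking around
    "look_back":   "paralysis",  # only looking back
}

def closure(views_taken):
    """Closure = all three views considered together before the next move."""
    taken = set(views_taken) & set(VIEWS)
    if taken == set(VIEWS):
        return "closure: act responsibly"
    if len(taken) == 1:
        return "pathology: " + VIEWS[taken.pop()]
    return "incomplete picture: gather the missing views first"

print(closure(["look_ahead"]))                              # pathology: fantasy
print(closure(["look_ahead", "look_around", "look_back"]))  # closure: act responsibly
```

The design choice to return a message rather than a boolean mirrors the text: closure is not a yes/no gate but a statement about whether the picture is whole enough to act responsibly.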
This is far from a technology-tools mindset, but it can very well be treated as a technology-relationship mindset. Seen as a relationship, there are approaches in science, technology, engineering, and mathematics (STEM) that enable handling those.
Shaping Systems collective intelligence
These are part of a larger vision of adaptive, resilient enterprises and organisations. The mindset even exceeds what is seen as an enterprise, extending to the communities that enterprises are part of.
Sys6x6Lean and Shape Design for ICT Systems Thinking form a unified framework for adaptive enterprises. Combining Lean processes, Zachman reference models, mediation, and innovation, these pages guide organizations in shaping resilient systems for complex environments. There is a special impracticable fractal: the demand is at "C-Shape design", the realisation at "r-serve devops sdlc". From the C-Shape location:
: 👉🏾 Sys6x6Lean page: focuses on systems thinking, Lean, viable systems modeling.
Shape Design for ICT Systems Thinking page: focuses on mediation, innovation, ICT organizational frameworks.
From the r-serve location:
: 👉🏾 Valuestream page: focuses on systems thinking, Lean, viable systems modeling.
Serve Devops for ICT Systems realisations page: focuses on practical innovations ICT organizational frameworks.

A recurring parable for methodologies and practices
Key challenges: Achieving Cross Border Government Innovation (researchgate Oecd opsi, foreword Geof Mulgan 2021 - collective intelligence)
OPSI is a global forum for public sector innovation. In a time of increasing complexity, rapidly changing demands and considerable fiscal pressures, governments need to understand, test and embed new ways of doing things.
Over the last few decades innovation in the public sector has entered the mainstream, in the process becoming better organised, better funded and better understood. But such acceptance of innovation has also brought complications, in particular regarding the scope of the challenges facing innovators, many of which extend across borders. Solutions designed to meet the needs of a single country are likely to be sub-optimal when applied to broader contexts. To address this issue, innovators need to learn from others facing similar challenges and, where possible, pool resources, data and capacities.
OPSI's colleagues in the OECD Policy Coherence for Sustainable Development Goals division (PCSDG) and the EC Joint Research Centre have developed a conceptual framework for analysing transboundary interrelationships in the context of the 2030 Agenda.
OPSI and the MBRCGI have observed an increased focus on cross-border challenge-driven research and innovation, with a particularly strong influence from agendas such as the SDGs.
A second challenge is how to institutionalise this work. It is not too difficult to engage people in consultations across borders, and not all that hard to connect innovators through clubs and networks. But transforming engagement into action can be trickier.
It is particularly hard to share data‚ especially if it includes personal identifiers (although in the future more "synthetic data" that mirrors actual data without any such identifiers may be more commonly used, particularly for collaborative projects in fields such as transport, health or education). It is also hard to get multiple governments to agree to create joint budgets, collaborative teams and shared accountability, even though these are often prerequisites to achieving significant impacts.
OPSI double four
RN-1.1.4 Progress
done and currently working on:

The topics that are unique on this page
👉🏾 Rules Axioms for the Zachman augmented reference framework (ZARF). 👉🏾 Connecting ZARF to systems thinking in the analogy of: 👉🏾 Explaining the patterns that are repeating seen in this.
👉🏾 use cases using the patterns for Zarf and by Zarf. Highly related in the domain context for information processing are:
open design_bianl:
workcell
valuestream
open design_sdlc :
DTAP Multiple dimensions processes by layers
ALC type 2 low code ML process development
ALC type 3 low code ML process development
vmap_layers01 low code ML process development
data administration *meta describing modelling data
Security *meta - modelling access information
meta data model
meta data process
meta secure
open local devops_sdlc:
prtfl_c22
prtfl_t33
relmg_c66
relmg_t46

RN-1.2 Technical requirements for knowledge systems

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-1.2.1
Archiving, Retention policies.
Information is not only active operational data but also historical: what has happened, who executed it, what was delivered, when the delivery took place, when the purchase was made, etc. That kind of information is often very valuable, but at the same time it is not clear how to organize it and who is responsible for it.
💣 Retention policies and archiving information are important; do them well. The financial and legal advantages are not that obviously visible. Only when problems escalate to high levels does it become clear, but by then it is too late to solve. When in financial trouble, cost cutting is easily done here.
Historical and scientific purposes, moved out of any organisational process.
An archive is an accumulation of historical records in any media or the physical facility in which they are located. Archives contain primary source documents that have accumulated over the course of an individual or organization's lifetime, and are kept to show the function of that person or organization. Professional archivists and historians generally understand archives to be records that have been naturally and necessarily generated as a product of regular legal, commercial, administrative, or social activities.
The words record and document have a slightly different meaning in this context than technical ICT staff are used to.
In general, archives consist of records that have been selected for permanent or long-term preservation on grounds of their enduring cultural, historical, or evidentiary value. Archival records are normally unpublished and almost always unique, unlike books or magazines of which many identical copies may exist. This means that archives are quite distinct from libraries with regard to their functions and organization, although archival collections can often be found within library buildings.

Additional information container attributes.
😉 EDW 3.0 Every information container must be fully identifiable. When there are compliancy questions on information of this kind, it is often assumed to be an ICT problem only. Classic applications lack these kinds of attributes with information.
compliancy_sdlc.jpg 💡 Additional information container attributes support implementing the defined retention policies. Every information container must carry the applicable retention references:
Common issues when working for retention periods.
An isolated archive system adds complexity; its reliability and availability are a big hurdle, with high impact.
Relevant information for legal purposes that has been moved out of the manufacturing process and is no longer available in legal cases is problematic.
Cleaning as soon as possible has a high impact. The GDPR states data should be deleted as soon as possible; this law gets much attention and has regulators. Archiving information for longer periods is not directly covered by laws, only indirectly.
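A minimal sketch of retention attributes carried by an information container, under stated assumptions: the field names (container_id, owner, retention_years, legal_hold) are illustrative choices, not a standard, and the deletion rule is a simplified reading of "delete as soon as allowed, never under legal hold".

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InformationContainer:
    """Hypothetical minimal retention attributes per information container."""
    container_id: str        # fully identifiable, stable name
    owner: str               # clear ownership
    created: date
    retention_years: int     # from the applicable retention policy
    legal_hold: bool = False

    def may_be_deleted(self, today: date) -> bool:
        """GDPR-style cleanup: delete once retention expires, never under legal hold."""
        expiry = self.created.replace(year=self.created.year + self.retention_years)
        return today >= expiry and not self.legal_hold

c = InformationContainer("inv-2015-0042", "finance", date(2015, 3, 1), 7)
print(c.may_be_deleted(date(2024, 1, 1)))  # True: retention period has passed
c.legal_hold = True
print(c.may_be_deleted(date(2024, 1, 1)))  # False: legal hold overrides cleanup
```

Attaching these attributes to the container itself, rather than keeping retention knowledge in a separate system, is what makes the later conflicts over "who is to blame for missing information" decidable from the data.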

compliancy_bpmbia.jpg
Government Information Retention.
Instead of a fight over how it should be solved, there is a fight over who is to blame for missing information. This has nothing to do with hard facts but everything with things like my turf and your fault. The different responsible parties each have their own opinion on how conflicts in retention policies should get solved.
🤔 With information deleted permanently, there is no way to recover when that decision is wrong.
🤔 The expectation that it would be cheaper and of better quality is a promise without warrants.

RN-1.2.2 Technology safe by design & open exchangeable
Business Continuity.
Loss of assets can disable an organisation from functioning. It is a risk analysis to determine what level of continuity, in what time, at what cost, is required and what kind of loss is acceptable. 💣 BCM is risk based, having visible costs for needed implementations but no visible advantages or profits. There are several layers:
Procedures , organisational.
People , personal.
Products, physical & cyber.
Communications.
Hardware.
Software.

Loss of physical office & datacentre.
In the early days of using computers, everything was located close to the office with all users because the technical communication lines did not allow long distances. Batch processing took a day or longer before results were seen on hard-copy prints. Limited terminal usage needed copper wires for connections.
etl-elt_01.png The disaster recovery plan was based on a relocation of the office with all users and the data centre when needed, in case of a total loss (disaster).
For business applications there was a dedicated backup for each of them, aside from the needed infrastructure software including the tools (applications).
The period to resilience could easily span several weeks; there was no great dependency yet on computer technology. Payments, for example, did not have any dependency in the 70's.

Loss of network connections.
The datacentre got relocated as telecommunications capacity increased. A hot standby with the same information on real-time duplicated storage became possible.
etl-elt_01.png The cost argument for this new option resulted in ignoring resilience against other types of disasters and ignoring archiving compliancy requirements.
With a distributed approach to datacentres, the loss of a single datacentre is no longer a valid scenario. With services spread over locations, the isolated DR test of having one location fail no longer has the value it had before.

Loss control to critical information.
Loss of information, software tools compromised, database storage compromised: these are the new scenarios now that everything has become accessible over communications. Losing control to hackers, being held to ransom, or having data leaked unwantedly to external parties is far more likely and more common than the previous disaster scenarios.
Swiss_cheese_model.png Not everything is possible to prevent. Some events are too difficult or costly to prevent. A risk-based evaluation determines how to build resilience.
Loss of data integrity - business.
Loss of confidentiality - information.
Robustness failing - single point of failures.
The Swiss cheese model of accident causation is a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth. Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize, since other defenses also exist, to prevent a single point of failure. Although the Swiss cheese model is respected and considered to be a useful method of relating concepts, it has been subject to criticism that it is used too broadly, and without enough other models or support.

Several triads of components.
Eliminating single points of failure in a backup (restore) strategy. Only the proof of a successful recovery is a valid checkpoint. 3-2-1 backup rules , the 3-2-1 backup strategy is made up of three rules, they are as follows:
  1. Three copies of data- This includes the original data and at least two backups.
  2. Two different storage types- Both copies of the backed up data should be kept on two separate storage types to minimize the chance of failure. Storage types could include an internal hard drive, external hard drive, removable storage drive or cloud backup environment.
  3. One copy offsite- At least one data copy should be stored in an offsite or remote location to ensure that natural or geographical disasters cannot affect all data copies.
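The three rules above are mechanical enough to check automatically. A sketch under stated assumptions: each backup copy is described by a storage_type string and an offsite flag, which are attribute names invented for this example.

```python
def satisfies_3_2_1(copies):
    """Check the 3-2-1 backup rules over a list of copy descriptions.

    copies: list of dicts like {"storage_type": "...", "offsite": bool}.
    """
    enough_copies = len(copies) >= 3                           # rule 1: three copies
    two_media = len({c["storage_type"] for c in copies}) >= 2  # rule 2: two storage types
    one_offsite = any(c["offsite"] for c in copies)            # rule 3: one copy offsite
    return enough_copies and two_media and one_offsite

copies = [
    {"storage_type": "internal_disk", "offsite": False},  # original data
    {"storage_type": "tape",          "offsite": False},  # local backup
    {"storage_type": "cloud",         "offsite": True},   # remote backup
]
print(satisfies_3_2_1(copies))      # True: all three rules hold
print(satisfies_3_2_1(copies[:2]))  # False: only two copies, none offsite
```

As the surrounding text stresses, a passing check is not the real checkpoint: only the proof of a successful recovery validates the strategy.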
wikepedia_informationsecurity.png
BCM is related to information security. They share the same basic components and the same goals.
An organization´s resistance to failure is "the ability ... to withstand changes in its environment and still function". Often called resilience, it is a capability that enables organizations to either endure environmental changes without having to permanently adapt, or the organization is forced to adapt a new way of working that better suits the new environmental conditions.
image:By I, JohnManuel, CC BY-SA 3.0
Auditing monitoring.
For legal requirements there are standards used by auditors. When they follow their checklist, a list of "best practices" is verified. The difference with "good practice" is the continuous improvement (PDCA) cycle.
Procedures , organisational.
People , personal.
Products, physical & cyber.
Security Operations Center.
Infrastructure building blocks- DevOps.
Auditing & informing management.

Audit procedure processing.
The situation was: Infrastructure building blocks- DevOps Leading. Auditing and informing management on implementations added for control.
Added is: Security Operations Centre, leading for evaluating security risk. Auditing and informing management on implementations added for control.
 
The ancient situation was: application program coding was mainly done in house. This has changed into using publicly available and commercially retrieved software when possible.
Instead of a software crisis of lines of code not being understood (business rules dependency), it has changed into used software libraries not being understood (vulnerabilities) and no understanding of how to control them, given the huge number of copied software libraries in use.
Instead of having only a simple infrastructure stack to evaluate, it has become a complicated infrastructure stack with an additional involved party, forming a triad to manage.
 
Penetration testing, also called pen testing or ethical hacking, is the practice of testing a computer system, network or web application to find security vulnerabilities that an attacker could exploit. Penetration testing can be automated with software applications or performed manually. Either way, the process involves gathering information about the target before the test, identifying possible entry points, attempting to break in -- either virtually or for real -- and reporting back the findings.
It will only report what is visible to the tester, and, using tools, only what is commonly known. There is no warrant that the system is not vulnerable after corrections are made. It is well possible there is no security risk at all because of the way the system is used and managed.
RN-1.2.3 Standard understandable naming conventions meta
logging monitoring.
Logging events when processing information generates new information. Using that logging information serves several goals. Some log information is related to the product and could also become new operational information.
💣 When there are different goals, an additional copy of the information is an option, but it introduces the possibility of integrity mismatches.
Data classification.
Information security
The CIA triad of confidentiality, integrity, and availability is at the heart of information security. (The members of the classic InfoSec triad confidentiality, integrity and availability are interchangeably referred to in the literature as security attributes, properties, security goals, fundamental aspects, information criteria, critical information characteristics and basic building blocks.) However, debate continues about whether or not this CIA triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy. Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts.
😉 Two additional, neglected attention points: An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal, and so not all information requires the same degree of protection. This requires information to be assigned a security classification.
Classified information
When labelling information into categories, an approach is:
  1. Public / unclassified
  2. Confidential, intended for circulation in the internal organisation and to authorized third parties at the owner's discretion.
  3. Restricted, information that should not be disclosed outside a defined group.
  4. Secret, strategically sensitive information only shared between a few individuals.
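The four labels above form an ordered scale, which makes access decisions comparable. A sketch assuming a simple "read at or below your clearance" rule; both the enum values and that rule are illustrative simplifications, not a prescribed policy.

```python
from enum import IntEnum

class Classification(IntEnum):
    """The four labels from the list above, ordered by sensitivity."""
    PUBLIC = 0        # unclassified
    CONFIDENTIAL = 1  # internal circulation, authorized third parties
    RESTRICTED = 2    # no disclosure outside a defined group
    SECRET = 3        # shared between a few individuals only

def may_access(reader_clearance: Classification, label: Classification) -> bool:
    """Assumed rule: a reader may access information at or below their clearance."""
    return reader_clearance >= label

print(may_access(Classification.CONFIDENTIAL, Classification.PUBLIC))  # True
print(may_access(Classification.CONFIDENTIAL, Classification.SECRET))  # False
```

Using an ordered enum instead of free-text labels is the design point: it prevents two silos from inventing incomparable category names for the same information.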

etl-elt_01.png
Using BI analytics
Using BI analytics in the security operations centre (SOC).
This technical environment of BI usage is relatively new. It demands very good runtime performance with well-defined, isolated, and secured data. There are some caveats:
Monitoring events and IDs may not be mixed with changing access rights.
There is limited insight at security design time; insight on granted rights is available.
It is called:
Security information and event management (SIEM)
is a subsection within the field of computer security, where software products and services combine security information management (SIM) and security event management (SEM). They provide real-time analysis of security alerts generated by applications and network hardware. Vendors sell SIEM as software, as appliances, or as managed services; these products are also used to log security data and generate reports for compliance purposes.

etl-elt_01.png Using BI analytics for capacity and system performance.
This technical environment of BI usage is relatively old, optimizing the technical system to perform better: defining containers for processes and implementing a security design.
Monitoring systems for performance is bypassed when the cost is felt to be too high.
Defining and implementing a usable agile security design is hard work.
Getting the security model and monitoring aligned for security purposes is a new challenge.
It is part of ITSM (IT Service Management). Capacity management's
primary goal is to ensure that information technology resources are right-sized to meet current and future business requirements in a cost-effective manner. One common interpretation of capacity management is described in the ITIL framework. ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management.
In the fields of information technology (IT) and systems management, IT operations analytics (ITOA) is an approach or method to retrieve, analyze, and report data for IT operations. ITOA may apply big data analytics to large datasets to produce business insights.


Loss of confidentiality. compromised information.
Getting hacked, having been compromised by whale phishing, is getting a lot of attention.
A whaling attack, also known as whaling phishing or a whaling phishing attack, is a specific type of phishing attack that targets high-profile employees, such as the CEO or CFO, in order to steal sensitive information from a company. In many whaling phishing attacks, the attacker's goal is to manipulate the victim into authorizing high-value wire transfers to the attacker.

Government Organisation Integrity.
This has nothing to do with hard facts but everything with things like my turf and your fault. The different responsible parties each have their own opinion on how conflicts about logging information should get solved.
🤔 With information deleted permanently, there is no way to recover when that decision is wrong.
🤔 The expectation that it would be cheaper and of better quality is a promise without warrants.
🤔 With no alignment between the silos, there is a question about the version of the truth.

RN-1.2.4 Base temporal data structure following lifecycles

RN-1.3 Classification of technical processing types

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-1.3.1 Info
DataWareHousing, Information flow based.
Repositioning the data warehouse as part of an operational flow makes more sense. A compliancy gap gets a solution:
The two vertical lines manage who has access to what kind of data: authorized by the data owner, registered data consumers, monitored and controlled.
In the figure:
df_csd01.jpg The following consumers are also valid for the warehouse. This is a very different approach to building up the enterprise information data warehouse. Axioms:
💡 No generic data model for relations between information elements - information containers.
💡 Every information container must be fully identifiable. 💡 Every information container must have clear ownership. For being fully identifiable, a well-designed stable naming convention is required.

Administrative Value Stream Mapping Symbol Patterns.
Help in abstracting ideas comes not from long text but from using symbols and figures. A blueprint is the old name for making a design before realisation. What is missing is something in between that helps in the value stream of administrative processing.
Input processing:
A well-defined resource is one that can be represented in rows and columns. The columns are identifiers for similar logical information in some context.
Execute Business Logic (score):
Logging: / Monitoring:
Output, delivery:

RN-1.3.2 Info
Administrative proposed standard pattern.
📚 The process is split up into four stages: the prepared request (IV, III) and the delivery (I, II). The warehouse is the starting point (inbound) and end point (outbound).
The request, with all necessary preparations and validations, goes through IV and III.
The delivery, with all necessary quality checks, goes through I and II.
lean procesoriented single workstation adddwh

SDLC life cycle steps - logging , monitoring.
Going back to the SDLC product life cycle, ALC model type 3: this is a possible implementation of the manufacturing phases I and II. 💡 There are four lines of artefact collections at releases that will become the different production versions.
  1. collecting input sources into a combined data model.
  2. modifying the combined data model into a new one suited for the application (model).
  3. running the application (model) on the adjusted suited data creating new information, results.
  4. Delivering verified results to an agreed destination in an agreed format.
SDLC life cycle steps - logging, monitoring. 💡 There are two points that validate the state and create additional logging. This is new information.
  1. After the input sources have been collected, technical and logical verification of what is there is done.
  2. Before the results are delivered, technical and logical verification of what is there is done.
This is logic containing business rules. The goal is application logging and monitoring from a business perspective. When something is badly wrong, halting the process flow is a safety mitigation preventing more damage.
There is no way to solve this with technical logfiles generated by tools like an RDBMS.
💡 The results are collected and archived (business dedicated). This is new information.
  1. After having created the result, but before delivering.
  2. It is useful for auditing purposes (what has happened) and for predictive modelling (ML).
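A sketch of the four manufacturing steps with the two validation points and the archive, under the assumption that a failed validation must halt the flow (all names here are our own illustration):

```python
ARCHIVE: list = []   # business-dedicated archive (auditing, predictive modelling)

class ValidationError(Exception):
    """Raised to halt the flow as a safety mitigation."""

def validate(batch: list, stage: str) -> None:
    # Technical/logical verification; the check itself creates new information.
    if not batch:
        raise ValidationError(f"halt at {stage}: nothing to process")

def produce(sources: list) -> list:
    combined = [row for src in sources for row in src]      # 1. collect inputs
    validate(combined, "after collect")                     # checkpoint 1
    suited = [r for r in combined if "key" in r]            # 2. adapt data model
    results = [{**r, "scored": True} for r in suited]       # 3. run the model
    validate(results, "before deliver")                     # checkpoint 2
    ARCHIVE.append(results)                                 # archive before delivery
    return results                                          # 4. deliver as agreed
```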

RN-1.3.3 Info
df_dlv_alctp3.jpg
Applied Machine learning (AI), operations.
Analytics and Machine Learning are changing the way rules are invented: from purely human invention to humans helped by machines.
💡 The biggest change is the ALC type 3 approach. This fundamentally changes how release management should be implemented. ML exchanges some of the roles of code and data in achieving results during development, but not in the other life cycle stages.
When research is done only for a report made once, the long wait for data deliveries of the old DWH 2.0 methodology is acceptable.
⚠ With a (near) real time operational process, the data has to be correct when the impact on the scoring is important. Using that approach, at least two data streams are needed. 🤔 Analytics (AI, ML, machine learning) has a duality in the definition of the logic. The modelling stage (develop) uses data that is similar to, but not the same as, the data in the operational stage. Developing is done with operational production data; the size of this data can be much bigger than what is needed at operations because of the required history. The way of developing is ALC type 3.
 
❗ The results an operational model generates should be well monitored, for many reasons. That is new information to process.

RN-1.3.4 Info
The technical solutions as first process option.
Sometimes a simple paper note will do, sometimes an advanced new machine is needed. It depends on the situation. A simple solution avoiding waste is lean - agile.
archive documents nosql A warehouse does not structure content; it must be able to locate the wanted content in a structured way, delivering the labelled containers efficiently.
Optimization Transactional Data. In the old days information was processed physically using flat files, still very structured, stored and labelled. In the modern approach these techniques are still applicable, although automated and hidden in an RDBMS.
Analytics & reporting. The "NO SQL" hype is a revival of choosing more applicable techniques.
It avoids the transactional RDBMS approach as the single possible technical solution.

etl-reality.jpg
Information process oriented, Process flow.
The information process in an internal flow has many interactions: inputs, transformations and outputs in flows.
There is no relationship to machines and networking in the model, yet the problem of solving those interactions will pop up at some point.
Issues with datatype conversions and integrity validation when using segregated sources (machines) will pop up at some point.

The service bus (SOA).
SD_enterpriseservicebus.jpg ESB enterprise service bus The technical connection for business applications is preferably made by an enterprise service bus. The goal is normalized systems.
Changing or replacing one system should not have any impact on others.

Microservice_Architecture.png
Microservices with api´s
Microservices (Chris Richardson):
Microservices - also known as the microservice architecture - is an architectural style that structures an application as a collection of services that are: The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack.

Data in containers.
informatie_mdl_imkad11.jpg Data modelling using the relational or network concepts is based on basic elements (artefacts).
An information model can use more complex objects as artefacts. In the figure every object type has got a different colour.
The information block is a single message describing the complete states before and after a mutation of an object. The life cycle of a data object becomes new meta-information; every artefact in the message follows that metadata.
This makes it possible to process a chained block of information. It does not follow the blockchain axioms. The real advantage of a chain of related information is detecting inter-relationships, with their possibly illogical or unintended effects.
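A minimal sketch of such an information block, with field names of our own choosing: each message carries the complete before and after states plus a reference to the previous block, which gives traceable chaining without adopting the blockchain axioms.

```python
import hashlib
import json

def make_block(prev_id: str, before: dict, after: dict) -> dict:
    """One message: complete state before and after a mutation, chained by id."""
    body = {"prev": prev_id, "before": before, "after": after}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "id": digest}
```

Following the `prev` references makes inter-relationships between mutations detectable.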

olap_star01.jpg
Optimization OLTP processes.
The relational SQL DBMS replaced codasyl network databases (see math). The goal is simplification of online transaction processing (OLTP) data by deduplication and normalization (techtarget) using DBMS systems supporting the ACID properties of transactions (IBM).
These approaches are necessary when doing database updates with transactional systems. Using this type of DBMS for analytics (read-only) was not the intention.
normalization (techtarget, Margaret Rouse ) Database normalization is the process of organizing data into tables in such a way that the results of using the database are always unambiguous and as intended. Such normalization is intrinsic to relational database theory. It may have the effect of duplicating data within the database and often results in the creation of additional tables.
ACID properties of transactions (IBM)
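A toy illustration of the deduplication that normalization brings (table and column names are invented): the repeated customer attributes move to their own table, and a foreign key remains in the order lines.

```python
def normalize(orders: list) -> tuple:
    """Split a flat order list into a customers table and an order-lines table."""
    customers, lines = {}, []
    for order in orders:
        cid = order["customer_id"]
        customers[cid] = {"name": order["customer_name"]}   # one row per customer
        lines.append({"order_id": order["order_id"], "customer_id": cid})
    return customers, lines
```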

RN-1.4 The connection of technology agile lean

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-1.4.1 Info
The Philosophy and Practicality of Jidoka
allaboutlean: The Three Fundamental Ways to Decouple Fluctuations Diving deep into the Toyota philosophy, you could see this as JIT telling you to let the material flow, and jidoka telling you when to stop the flow. This is a bit like the Chinese philosophical concept of Yin and Yang, where seemingly opposite or contrary forces may actually be complementary.
The same applies here. JIT encourages flow, and Jidoka encourages stops, which seems contrary. However, both help to produce more and better parts at a lower cost. Unfortunately, JIT gets much, much more attention as it is the glamorous and positive side, whereas jidoka is often seen as all about problems and stops and other negative aspects. Yet, both are necessary for a good production system.
💣 Ignoring the holistic view of the higher goal and focusing only on a detailed aspect like JIT can make things worse, not better.

project shop, moving the unmovable.
The project shop is associated with the impossibility of applying lean thinking. Is that so, or are there situations where new technology implements a lean way of working?
allaboutlean projectshop - building ship
It uses a great invention of process improvement over and over again: the dock. Building in the water is not possible; building ashore raises the question of how to get the ship into the water safely.
🔰 Reinvention of patterns.
Moving something that is unmovable.
Changing something that has always been done that way.

 Timelapse - Inschuiven tunneldeel A12 Minimizing the time for a road adjustment by placing a prefabricated tunnel: placing it, once it could be moved, was done in just 3 days, while building it took several months.
See the time-lapse. 👓 Placing the tunnel was a success; a pity the intended road isn´t done after three years.
 
The project approach of moving the unmovable has been copied many times, with the intended usage afterwards. rail bridge deck cover The approach is repeatable.
💡 Reinvention of patterns. Moving something that is unmovable.
🎭When a project shop is better in place, why not copy this at ICT?

Administration information flow.
Seeing this way of working, the association is with administrative work moving the papers around.
Unstructured and Pulse Line Flow lines are often the best and most organized approach to establish a value stream.
The "easiest" one is an unstructured approach. The processes are still arranged in sequence; however, there is no fixed signal when to start processing a part.
kantoorttuin 💡 Reinvention of patterns. Using the information flow as assembly line.
🎭 When a flow line is a fit for an administrative process, why not copy this at ICT?
🎭 When an administrative process is associated with administrative tags (e.g. product description) being processed, why not have them related to each other?
Administrative process, differences to physical objects.
RN-1.4.2 Info
Change data - Transformations
Seeing the value stream within an administrative product is a different starting point that enables completely new approaches. The starting point is redesigning what is not working well: not automatically keeping things the way they have always been done, and not changing things just for the sake of change.
Design thinking.
It is a common misconception that design thinking is new. Design has been practiced for ages: monuments, bridges, automobiles, subway systems are all end-products of design processes. Throughout history, good designers have applied a human-centric creative process to build meaningful and effective solutions.
BISL gap Business ICT The design thinking ideology is following several steps.
Definition: The design thinking ideology asserts that a hands-on, user-centric approach to problem solving can lead to innovation, and innovation can lead to differentiation and a competitive advantage. This hands-on, user-centric approach is defined by the design thinking process and comprises 6 distinct phases, as defined and illustrated below.
See link at figure 👓.
 
Those six phases are in line with what the CRISP-DM model states. What is missing when comparing this with the PDCA cycle is the Check: verifying it works as expected after implementation.

many partitioned dws-s process cycle demo
Combining information connections between silos & layers.
💡 Solving gaps between silos in the organisation supports the value stream.
Having information aligned by the involved parties avoids different versions of the truth. It is easier to consolidate that kind of information to a centrally managed (BI analytics) tactical - strategical level.
The change needed to achieve this is one of cultural attitude. That is a top down strategical influence.

RN-1.4.3 Info
Tuning performance basics.
Solving performance problems requires understanding of the operating system and hardware. That architecture was set by von Neumann (see design-math).
vonNeumann_perftun01.jpg
A single CPU, limited internal memory and the external storage.
The time differences between those resources span orders of magnitude (factor 100-1000).

Optimizing is balancing between choosing the best algorithm and the effort needed to achieve that algorithm.
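A small illustration of that balance: the same membership question answered twice. The hashed variant costs the extra one-off effort of building an index, after which every probe is a cheap lookup instead of a full scan.

```python
def matches_scan(keys: list, probes: list) -> int:
    """Naive variant: every probe scans the whole key list."""
    return sum(1 for p in probes if p in keys)

def matches_hashed(keys: list, probes: list) -> int:
    """Better algorithm: one-off build effort, then constant-time probes."""
    index = set(keys)
    return sum(1 for p in probes if p in index)
```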

vonNeumann_perftun02.jpg
That concept didn´t change. Advances in hardware made it affordable to ignore the knowledge of tuning.

The Free Lunch Is Over .
A Fundamental Turn Toward Concurrency in Software, By Herb Sutter.
If you haven´t done so already, now is the time to take a hard look at the design of your application, determine what operations are CPU-sensitive now or are likely to become so soon, and identify how those places could benefit from concurrency. Now is also the time for you and your team to grok concurrent programming´s requirements, pitfalls, styles, and idioms.

Additional component: the connection from the machine (multiple CPUs, several banks of internal memory) to multiple external storage boxes by a network.

Perftun_EtL01.jpg
Tuning cpu - internal memory.
Minimize resource usage: ❗ The "balance line" algorithm is the best. A DBMS will do that when possible.
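The balance-line idea can be sketched as a match-merge of two inputs sorted on the same key, consumed in one forward pass; this simplified version assumes unique keys per side.

```python
def balance_line(left: list, right: list) -> list:
    """Join (key, value) rows that are sorted ascending on key, in one pass."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lkey, rkey = left[i][0], right[j][0]
        if lkey == rkey:
            out.append((lkey, left[i][1], right[j][1]))   # matched pair
            i += 1
            j += 1
        elif lkey < rkey:
            i += 1      # unmatched: advance the side with the lower key
        else:
            j += 1
    return out
```

This is the plan a DBMS chooses itself (a merge join) when both inputs are already sorted.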

Perftun_EtL02.jpg
Network throughput.
Minimize delays, use parallelization:
⚠ Transport buffer size is a cooperation between the remote server and the local driver. The local optimal buffer size can be different. Resizing data in buffers is a cause of performance problems.

Perftun_EtL03.jpg
Minimize delays in the storage system.
⚠ For analytics, tuning IO is quite different from transactional DBMS usage.
💣 This different, non-standard approach must be in scope for service management. The goal of sizing capacity is better understood than striping for IO performance.

DBMS changing types
A mix of several DBMS types is allowed in an EDWH 3.0. The speed of transport and the retention periods are important considerations. Technical engineering determines details and limitations, up to the state of the art and cost factors.
dbmsstems_types01.png
RN-1.4.4 Info
BISL Business Information Services Library.
BiSL is used for a demand supply chain, often going along with internal business and externally outsourced IT services. Nice to see is a separation of concerns in a similar way, placing the high level drivers in the center.
The framework describes a standard for processes within business information management at the strategy, management and operations level. BiSL is closely related to the ITIL and ASL frameworks, yet the main difference between these frameworks is that ITIL and ASL focus on the supply side of information (the purpose of an IT organisation), whereas BiSL focuses on the demand side (arising from the end-user organisation).
Business Process The demand side focus for some supply is a solution for the supposed mismatch between business & ICT. The approach for that mismatch is an external supplier.
Business Process Indeed there are gaps. The question should be: is there a mismatch, or have the wrong questions been asked?
In the value stream flow there are gaps between:
  1. operational processes, in the chain of the product transformation - delivery.
  2. delivering strategical management information, assuming the silos in the transformation chains - delivery are cooperating.
  3. extracting and creating management information within the silos, between their internal layers.

This has nothing to do with hard facts but everything with things like my turf and your fault. Different responsible parties have their own opinion on how those conflicts should get solved. The easy way is outsourcing the problem to an external party, bringing in a new viewpoint.
🤔 The expectation that this would be cheaper and of better quality is a promise without warrants.
🤔 With no alignment between the silos, there is a question about the version of the truth.

Business Process When these issues are the real questions, the real problems to solve are:
  1. Solve the alignment of operational processes with the value stream of the product. Both parties need to agree on a single version of the truth.
  2. Solve the alignment in extracting and creating management information within the silos, between their internal layers. There are two lines of separation in context.
  3. Use the management information within the silos as consolidated information when delivering strategical management information.


RN-1.5 Closed loops, informing what is going on in the system

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-1.5.1 The EDWH - Data Lake - Data Mesh - EDWH 3.0
Classic DataWareHousing.
Processing objects and processing information go along with responsibilities. There is an origin of the information and a consumer of combined information lines.
A data warehouse is at the moment siloed into reporting tasks: reporting in dashboards and reports, so managers make up their minds with those reports as the "data".
Other usage of a data warehouse is seen as problematic: when used for operational informational questions it may involve AI, or better, machine learning, bypassing those managers as the decision makers. 👓  data-lake-to-data-marketplace
The technology question of what kind of DBMS should be used in a monolithic system for management reporting is asked as a strategy question.
Data curation before data is used in a monolithic system for management reporting is a strategy question.
Historical information in this monolithic system for management reporting is a question.
Connecting to analytical usage in an operational flow in this monolithic system for management reporting is a question.

RN-1.5.2 Info
💡 Logistics of the EDWH - Data Lake. EDWH 3.0
As the goal of BI Analytics was delivering reports to managers, securing information and runtime performance were not relevant.
Securing information is too often an omission.
Transforming data should be avoided.
The data-consumer process should do the logic processing.
Offloading data, doing the logic in Cobol before loading, is an ancient approach to be abandoned. Processing objects and information goes along with responsibilities.
❗ A data warehouse is allowed to receive semi-finished products for the business process.
✅ A data warehouse knows who is responsible for the inventory being serviced.
❗ A data warehouse has processes in place for delivering and receiving verified inventory.
In a picture:
df_csd01.jpg The two vertical lines manage who has access to what kind of data: authorized by the data owner, with registered data consumers, monitored and controlled.
The confidentiality and integrity steps are not bypassed with JIT (lambda).

CIA Confidentiality Integrity Availability. Activities.

CSD Collect, Store, Deliver. Actions on objects.
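The two vertical lines can be sketched as an explicit grant registration: who may consume which container, authorized by the data owner, with every decision logged for monitoring. The data structures here are our own illustration.

```python
# (container, consumer) -> authorizing data owner; filled by registration.
GRANTS = {("sales_dwh", "reporting_team"): "data_owner_sales"}
AUDIT: list = []   # monitored and controlled: every decision is logged

def may_deliver(container: str, consumer: str) -> bool:
    """Allow delivery only to registered consumers; log the decision."""
    allowed = (container, consumer) in GRANTS
    AUDIT.append(f"{consumer} -> {container}: {'ok' if allowed else 'denied'}")
    return allowed
```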

There is no good reason not to do this for the data warehouse as well, when positioned as a generic business service. (EDWH 3.0)
Focus on the collect - receive side.
There are many different options for how to receive information and process data: multiple sources of data, multiple types of information.
df_collect01.jpg In a picture:
 
A data warehouse should be the decoupling point of incoming and outgoing information.
 
A data warehouse should validate and verify the delivery against what is promised to be there: just the promise according to the registration by the administration, not the quality of the content (a different responsibility).
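That promise check can be sketched as comparing the registered delivery against what actually arrived, without inspecting the content itself (the structure is invented for illustration):

```python
def check_promise(promised: dict, received: dict) -> list:
    """Compare promised container names and row counts with the delivery."""
    issues = []
    for name, rows in promised.items():
        if name not in received:
            issues.append(f"missing container: {name}")
        elif received[name] != rows:
            issues.append(f"{name}: promised {rows} rows, got {received[name]}")
    return issues   # an empty list means the promise was kept
```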

Focus on the ready - deliver side.
A classification by consumption type:
df_delivery01.jpg In a picture:
 
There are possibly many data consumers.
It is all about "operational" production data - production information.
 
Some business applications are only possible using the production information.

RN-1.5.3 Info
Some mismatches in a value stream.
Aside from all direct questions from the organisation, many external requirements are coming in. A limited list to get an idea of regulations having impact on the administrative information processing.
business flow & value stream.
Business Process top down Having a main value stream from left to right, the focus can be top down with the duality of processes - transformations and the product - information.
Complicating factor is that:
✅ Before external data can be retrieved, the agreement on what to retrieve must be there at some level.
✅ Before the delivery can be fulfilled, the request on what to deliver must be there.
Business Process bottom up Having the same organisation, the focus can be bottom up with the layers in silos and separation of concerns.
Complicating factor is that:
❓ In the centre, needed government information is not coming in by default. The request for that information is not reaching the operational floor.
😲 The silos responsible for parts of the operating process are not exchanging needed information in the easiest way by default.


EDW development approach and presentation
BI DWH, datavirtualization.
Once upon a time there were big successes using BI and Analytics. The successes were achieved by the good decisions made in those projects, not by best practices.
To copy those successes, the best way would be understanding the decisions that were made. Regrettably, these decisions and why they were made are not published.
Lans_datavirtualise.jpg The focus for achieving success shifted to using the same tools as those successes.
BI Business Intelligence has long claimed to be the owner of the E-DWH. Typical in BI is that almost all data is about periods. Adjusting data to match the differences in periods is possible in a standard way. The data virtualization is built on top of the "data vault" DWH 2.0, built specifically for BI reporting usage. It is not virtualization on top of the ODS or the original data sources (staging).

dashboard BI Presenting data using figures as BI.
The information for managers is commonly presented in easily understandable figures.
When used for giving satisfying messages or escalating problems, there is a bias to prefer the satisfying messages over the ones alerting for possible problems.
😲 No testing and validation processes are deemed necessary, as nothing is operational, just reporting to managers.

df_dlv_bi-anl.jpg 💡 The biggest change for a DWH 3.0 approach is the shared location of data and information being used for the whole organisation, not only for BI.
 
Dimensional modelling and the Data Vault for building up a dedicated storage are seen as the design patterns solving all issues. OLAP modelling and reporting on the production data deliver new information for managers while overcoming performance issues. A more modern approach is using in-memory analytics; in-memory analytics still needs a well designed data structure (preparation).
 
😱 Archiving historical records that may be retrieved is an option that should be regular operations, not a DWH reporting solution.
The operations (value stream) process sometimes needs information from historical records. That business question is a solution for limitations in the operational systems; those systems were never designed and realised with archiving and historical information in mind.
⚠ Storing data in a DWH can be done in many possible ways. The standard RDBMS dogma has been augmented with a lot of other options. Limitation: technical implementations are not always well suited because of the difference to an OLTP application system.

RN-1.5.4 Info
many partitioned dws-s process cycle demo
Reporting Controls (BI)
The understandable goal of BI reporting and analytics reporting is rather limited, that is:
📚 Informing management with figures,
🤔 so they can make up their mind on their actions - decisions.
The data explosion. The change is the amount we are collecting, measuring processes as new information (edge).
📚 Information questions.
⚙ measurements data figures.
🎭 What to do with new data?
⚖ legally & ethical acceptable?
dashboard classic
When controlling something it is necessary to:
👓 Knowing where it is heading.
⚙ Able to adjust speed and direction.
✅ Verifying all is working correctly.
🎭 Discuss destinations, goals.
🎯 Verify achieved destinations, goals.
 
It is basically like using a car.
Adding BI (DWH) to layers of enterprise concerns.
Having the three layers, separation of concerns: at the edges of those layers inside the hierarchical pyramid there is interesting information to collect for controlling & optimising the internal processes. For strategic information control, the interaction with the documentational layer is the first one to become visible.

many partitioned dws-s process cycle demo Having the four basic organisational lines that are assumed to cooperate as a single enterprise in the operational product value stream circle, there are gaps between those pyramids.
 
Controlling them at a higher level uses information the involved parties, two by two, agree on. This adds another four points of information. Consolidating those four interaction points into one central point makes the total number of strategic information containers nine.

dashboard_airbus_a380.jpg
Too complicated and costly BI.
When trying to answer every possible question:
💰 requiring a lot of effort (costly)
❗ every answer 👉🏾 new questions ❓.
🚧 No real end situation,
continuous construction - development.
 
The simple, easy car dashboard could end up as an airplane cockpit and still miss the core business goals to improve.

ETL ELT - No Transformation.
etl-elt_01.png Classic is the processing order:
⌛ Extract, ⌛ Transform, ⌛ Load. For segregation from the operational flow a technical copy is required. Issues are:
Translating the physical warehouse to ICT.
Diagram_of_Lambda_Architecture_generic_.jpg All kinds of data (technical) should get support for all types of information (logical) at all kinds of speed. Speed, streaming, bypasses (duplications allowed) the store - batch for the involved objects. Fast delivery (JIT, Just In Time).
💣 The figure is what is called lambda architecture in data warehousing.
lambda architecture (wikipedia). In physical warehouse logistics this question for a different architecture is never heard of. The warehouse is supposed to support the manufacturing process. For some reason the data warehouse has been reserved for analytics and does not support the manufacturing process.
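The serving side of a lambda architecture can be sketched in a few lines: the batch view is authoritative for history, while the speed layer overrides it for keys the batch run has not caught up with yet (duplication between the layers is allowed by design). The key and value names are illustrative.

```python
def serving_view(batch: dict, speed: dict) -> dict:
    """Merge batch and speed layers; speed-layer values win for shared keys."""
    return {**batch, **speed}
```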

RN-1.6

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-1.6.1 Info
wrh_selfsrvc-01.jpg
Selfservice - Managed
Self service sounds very friendly, but it is a euphemism for no service: collecting your data and processing your data yourself.
The advantage for the customer is picking whatever feels convenient from some shelf. The disadvantages are: wrh_cntr_stor.jpg Have it prepared and transported for you so it can be processed for you. The advantages are a well controlled environment that is also capable of handling more sensitive material (confidential secrets).
RN-1.6.2 Info
Maturity Level 1-5
IT-Business Strategic Alignment Maturity- Jerry Luftman
Why -still- discuss IT-business alignment?
4. In search of mythical silver bullet
5. Focusing on infrastructure/architecture
7 Can we move from a descriptive vehicle to a prescriptive vehicle?

(see link with figure 👓)
💣 This CMM level discussion has been going on since 1990. Little progress in results has been made; that can be explained by the document analyses and the listed numbers.
Trying to achieve the levels by ticking off some action list is a way to not achieve those goals. Cultural behaviour is very difficult to measure. Missing in IT is the C for communication: ICT.

Retrospective on applying collective intelligence for policy.
Ideas into action (Geoff Mulgan )
What's still missing is a serious approach to policy. I wrote two pieces on this one for the Oxford University Press Handbook on Happiness (published in 2013), and another for a Nef/Sitra publication. I argued that although there is strong evidence at a very macro level (for example, on the relationship between democracy and well-being), in terms of analysis of issues like unemployment, commuting and relationships, and at the micro level of individual interventions, what's missing is good evidence at the middle level where most policy takes place. This remains broadly true in the mid 2020s.
I remain convinced that governments badly need help in serving the long-term, and that there are many options for doing this better, from new structures and institutions, through better processes and tools to change cultures. Much of this has to be led from the top. But it can be embedded into the daily life of a department or Cabinet. One of the disappointments of recent years is that, since the financial crisis, most of the requests to me for advice on how to do long-term strategy well come from governments in non-democracies. There are a few exceptions - and my recent work on how governments can better 'steer' their society, prompted by the government in Finland, can be seen in this report from Demos Helsinki.
During the late 2000s I developed a set of ideas under the label of 'the relational state'. This brought together a lot of previous work on shifting the mode of government from doing things to people and for people to doing things with them. I thought there were lessons to learn from the greater emphasis on relationships in business, and from strong evidence on the importance of relationships in high quality education and healthcare. An early summary of the ideas was published by the Young Foundation in 2009. The ideas were further worked on with government agencies in Singapore and Australia, and presented to other governments including Hong Kong and China. An IPPR collection on the relational state, which included an updated version of my piece and some comments, was published in late 2012.
I started work on collective intelligence in the mid-2000s, with a lecture series in Adelaide in 2007 on 'collective intelligence about collective intelligence'. The term had been used quite narrowly by computer scientists, and in an important book by Pierre Levy. I tried to broaden it to all aspects of intelligence: from observation and cognition to creativity, memory, judgement and wisdom. A short Nesta paper set out some of the early thinking, and a piece for Philosophy and Technology Journal (published in early 2014) set out my ideas in more depth. My book Big Mind: how collective intelligence can change our world from Princeton University Press in 2017 brought the arguments together.

RN-1.6.3 Info
Technology push focus BI tools.
The technology offerings have been changing rapidly in recent years (as of 2020). Hardware is not a problematic cost factor anymore, functionality is. Choosing a tool, or having several of them, goes with personal preferences.
This has nothing to do with hard facts but everything with things like my turf and your fault. Different responsible parties have their own opinion on how conflicts should get solved. In a technology push the organisational goal is no longer central; it is about showing one's personal position inside the organisation.
🤔 The expectation of cheaper and having better quality is a promise without warrants .
🤔 Having no alignment between the silo´s there is a question on the version of the truth.

Just an inventory of the tools and the dedicated areas they are used in: Matt Turck on 2020, bigdata 2020. An amazing list of all kinds of big data tools on the market.
2019 Matt Turck Big Data Landscape

RN-1.6.4 Info
Changing the way of informing.
Combining data transfer, microservices, archive requirements and security requirements, and doing it with the maturity of physical logistics, points in the direction of a centrally managed approach while doing as much as possible decentralised. Decoupling activities where possible keeps the problems that pop up small enough to be humanly manageable.
 
Combining information connections.
There are a lot of ideas that, when combined, give another situation: many partitioned dws-s process cycle demo 💡 Solving gaps between silos supports the value stream. Those are the rectangular positioned containers connecting between the red/green layers (eight internal intermediates in total).
💡 Solving management information into the green/blue layers internally in every silo. These are the second containers in every silo (four: more centralised).
💡 Solving management information gaps between the silos following the value stream at a higher level. These are the containers at the circle (four intermediates).
Consolidate that content to a central one.
🎭 The result: management information supported in nine (9) containers following the product flow at strategic level. Not a monolithic central management information system, but one that is decentralised, delegating as much as possible to satellites.
💡 The outer operational information rectangle is having a lot of detailed information that is useful for other purposes. One of these is the integrity processes. A SOC (Security Operations Centre) is an example for adding another centralised one.
🎭 The result: management information supported in nine (9) containers following the product flow at strategic level, another eight (8) at the operational level, and possibly more. Not a monolithic central management information system, but one that is decentralised, delegating as much as possible to satellites.
🤔 Small is beautiful: instead of big, monolithic, costly systems, many smaller ones can do the job better and more efficiently. The goal: repeating a pattern instead of a one-off project shop. The duality: while making the change, the work itself will resemble a project shop.

shp_cntr_load-2.jpg
Containerization.
We are used to the container boxes used these days for all kinds of transport. The biggest container ships travel the world reliably, predictably and affordably.
Normal economical usage: load, unload, return; many predictable, reliable journeys.

shp_cntr_liberty.jpg The first container ships were these Liberty ships. Fast and cheap to build. The high loss rate was not a problem; it was solved by building many of them. They were built as project shops, but at many locations, with the advantage of a known design to build over and over again.
They were not designed for many journeys; they were designed for deliveries in war conditions.

allaboutlean projectshop - building a ship in a project shop.
To cite:
This approach is most often used for very large and difficult to move products in small quantities.
...
There are cases where it is still useful, but most production is done using job shops or, even better, flow shops.
💣 The idea is that everything should become a flow shop, even when not applicable. In ICT, delivering software at high speed is seen as a goal; that idea misses the data value stream as the goal.

Containerization.
Everybody attaches a different meaning to the word "data". That is confusing when trying to do something with data. A mind switch is to see it as information processing in enterprises. As the datacentre is not a core business activity for most organisations, there is a move towards outsourcing (cloud, SaaS).
When engineering a process flow, there will be waits at a lot of points. At the starting and ending points the flow crosses from internal to external, where far longer waits for artefacts or product deliveries will occur. Avoiding fluctuations and having a predictable, balanced workload is the practical solution to become efficient.
Processing objects, collecting information and delivering results goes along with responsibilities. It is not sexy; in fact it is rather boring. Yet without good implementation all other activities easily become worthless. The biggest successes, like Amazon, are probably based more on doing this very well than on anything else. The Inner Workings of Amazon Fulfillment Centers
Commonly used ICT patterns for processing information: for a long time the only delivery of an information process was a hard-copy paper result. Delivery of results has since changed to many options, and the storing of information has changed as well.
 
Working on a holistic approach to information processing, starting at the core activities, can solve a lot of problems. Why work only on symptoms and not on root causes?
💡 Preparing data for BI and analytics has become an unnecessary prerequisite. Build a big design up front: the enterprise data warehouse (EDWH 3.0).

Data Technical - machines oriented
The technical, machine-oriented approach is about machines and the connections between them (the network). The service of delivering infrastructure (IaaS) is limited to these kinds of objects, not to how they are interrelated.
The problems to solve behind this are questions like:

df_machines.jpg 🤔 A bigger organisation has several departments. The expectation is that their work has interactions and that there are some central parts.
Sales, Marketing, Production lines, bookkeeping, payments, accountancy.
🤔 Interactions between all those departments lead to complexity.
🤔 The number of machines and the differences in stacks are growing fast, no matter where these logical machines are.
A dedicated set of machines for every business service will increase complexity further.

The information process flow has many interactions, inputs, transformations and outputs.
💡 Reinvention of a pattern: the physical logistics warehouse approach is well developed and works well. Why not copy that pattern to ICT? (EDWH 3.0)
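A flow with inputs, transformations and outputs can be sketched as a small pipeline. This is a minimal, hypothetical illustration; the stage names (`clean`, `enrich`) are invented here, not from the document.

```python
# A minimal sketch of an information process flow: input rows pass through a
# chain of transformations, each stage consuming the previous stage's output.
def clean(rows):
    # Transformation 1: normalise raw input.
    return [r.strip().lower() for r in rows]

def enrich(rows):
    # Transformation 2: add derived attributes.
    return [{"value": r, "length": len(r)} for r in rows]

def pipeline(rows, stages):
    # The "flow shop" pattern: a fixed sequence of stages, repeatable for
    # every batch, instead of a one-off project-shop effort.
    for stage in stages:
        rows = stage(rows)
    return rows

result = pipeline(["  Sales ", "Marketing"], [clean, enrich])
print(result[0])  # → {'value': 'sales', 'length': 5}
```

Each stage only knows its input and output, so stages can be added, swapped or reused across flows, the same way a warehouse reuses its handling steps for every shipment.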

printing delivery line
What is delivered in an information process?
Mailing print processing is the oldest front-end system using back-end data. The moment of printing is not the same as the moment the information was manufactured.

Many more front-end deliveries have been created in recent years, the dominant ones being webpages and apps on smartphones.
A change in attitude is needed, while still seeing it as a delivery that requires the quality of the information to be ensured by the process.

Change data - Transformations
A data strategy that helps the business should be the goal: processing information as "documents" with detailed elements encapsulated, and treating transport and archiving alongside production in one holistic approach.
shp_cntr_clct.jpg Logistics using containers.
The standard approach in information processing focusses on the most detailed artefacts, trying to build a holistic data model for all kinds of relationships. This is how goods were once transported: as single items (pieces). That has changed into containers with the goods encapsulated.
💡 Use of labelled information containers instead of working with detailed artefacts.

shp_cntr_store.jpg 💡 Transport of containers requires some time; the required time, however, is predictable. Trusting that the delivery is on time and that the quality conforms to expectations is more efficient than trying to do everything in real time.

shp_cntr_dlv.jpg Information containers that have arrived are almost ready for delivery, with a more predictable moment of delivery to the customer.
💡 The expected-delivery notice has become standard in physical logistics. Why not do the same in administrative processes?
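The labelled-container idea can be sketched in code: the label travels with the encapsulated content, so handling logic reads only the label and never the detailed artefacts inside. A minimal, hypothetical illustration; the class, labels and routing rule are invented here, not from the document.

```python
from dataclasses import dataclass, field

# A hypothetical "labelled information container".
@dataclass
class InfoContainer:
    label: str                                   # what it is, e.g. "closed-invoices-2019"
    origin: str                                  # where it came from
    content: list = field(default_factory=list)  # encapsulated detail, opaque to transport

def route(container: InfoContainer) -> str:
    # Routing uses the label only, like scanning a shipping container,
    # never the detailed artefacts inside.
    return "archive" if container.label.startswith("closed-") else "process"

box = InfoContainer("closed-invoices-2019", "sales", ["inv-001", "inv-002"])
print(route(box))  # → archive
```

Because transport and archiving only depend on the label, the detailed data model inside the container can change without breaking the logistics around it.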

Data Strategy: Tragic Mismatch in Data Acquisition versus Monetization Strategies.
A nice review of this: "Organizations do not need a Big Data strategy; they need a business strategy that incorporates Big Data" (Bill Schmarzo, 2020).
value for the money
Companies are better at collecting data (about their customers, about their products, about competitors) than analyzing that data and designing strategy around it. Too many organizations are making Big Data, and now IoT, an IT project. Instead, think of the mastery of big data and IoT as a strategic business capability that enables organizations to exploit the power of data with advanced analytics to uncover new sources of customer, product and operational value that can power the organization's business and operational models.

A tale of two architectures - Kimball vs Inmon
Into the miasma came Bill Inmon's best-selling book, Building the Data Warehouse, with the industry-accepted definition of a data warehouse: "a subject oriented, integrated, non volatile, time variant collection of data for management's decision making".
But there was another related architecture that arose in roughly the same time frame. That architecture is the one that can be called the "Kimball" architecture. It is the Kimball architecture that is associated with Red Brick Systems.
DWH 2.0 The current state of DWH 2.0 in a figure (see right side).

Those 4 levels are a reflection of what happens in organisations processing the flows.
The DW 2.0 architecture then represents the evolving architecture for the data warehouse. It shows how the best features of the Inmon architecture and the Kimball architecture can be combined very adroitly. DW 2.0 represents a long-term architectural blueprint to meet the needs of modern corporations and modern organizations.

🔰 Contents Frame-ref ZarfTopo ZarfRegu SmartSystem ReLearn 🔰
  
🚧  Knowium P&S-ISFlw P&S-ISMtr P&S-Pltfrm Fractals Learn-I 🚧
  
🎯 Know_npk Gestium Stravity Human-cap Evo-InfoAge Learn-@2 🎯


RN-2 The impact of uncertainty to information processing


dual feeling

RN-2.1 Reframing the thinking for decision making

This is a different path on information processing, supporting governance and informed, understandable decisions. This all started with an assumption of certainty for knowledge management and collective intelligence. Decisions, however, are made under assumptions and uncertainty.

RN-2.1.1 Distinctions containing tensions in grammar
A culture in understanding defining concepts
Before we argue about systems, we need to define definitions (LI: A Abduhl 2026)
We use different kinds of definitions for different purposes, without noticing. We often conflate:
  1. Lexical definitions - describe how a term is commonly used
  2. Theoretical definitions - specify how a term functions within a theory
  3. Stipulative definitions - declare meaning for a specific context ("for this project…")
  4. Operational definitions - define meaning through measurement or execution
  5. Persuasive definitions - frame meaning to influence behaviour or belief
  6. Precising definitions - narrow an existing concept to reduce ambiguity "across contexts"
  7. Meta-Semantic Definitions - how meanings themselves are constructed, selected, or transformed across contexts.
👐 It doesn't define a term; it defines the rules for defining. Think of it as the governance layer for meaning. This is exactly the kind of layer I use in JABES: a semantic governance layer. This seventh layer is what allows building fractal, recursive, multi-perspective governance models, my home turf.
The first six: none are wrong in isolation.
But when we slide between them unconsciously or tell lay audiences "there are no such things as systems", we create confusion and end up talking past each other.
If we want to coordinate action, we need to get past arguing about "the right definition" and be explicit about purpose. Here I suggest a "precising" definition of a system. It doesn't try to resolve tensions; it surfaces them.
👐 "A system is a set of interconnected elements whose relationships, constraints, and structure generate emergent behaviours different from those of the isolated parts, which may be recursively nested across multiple scales. It is distinguished by boundaries (physical or conceptual), operates through feedback loops, and may maintain identity through regulation and adaptation - though its definition and boundaries are ultimately determined by the observer's perspective and intent."
What is happening: the word grammar is a construct for operations in languages, making up sentences for meaning in communication. That communication using sentences therefore holds an observer dependency.

A culture in understanding defining concepts
The continuation by A.Abduhl, toward his goal, was in making a more precise definition of System 2 in VSM (viable systems). The tension is about inside/outside thinking in systems thinking.
👁 It's a deliberate synthesis, grounded in several traditions:
  1. Interconnection & emergence ➡ Ludwig von Bertalanffy
  2. Feedback & regulation ➡ Wiener, Ashby
  3. Viability & identity ➡ Beer
  4. Observer, purpose, boundary choice ➡ Checkland, Heinz von Foerster
Moving beyond false dichotomies
👁 (Systems) thinking oscillates between two positions:
  1. "Reality is out there": objective entities waiting to be discovered.
  2. "Reality is socially constructed": systems are narratives shaped by perspective and purpose.
Both are incomplete. This tension didn't start with systems thinking. As Kant showed, reality is real, but never encountered unmediated. More recently, Iain McGilchrist makes the same move from a different angle:
  1. "Reality is real": but our access to it depends on how we attend to it.
That maps directly to systems thinking. As Checkland put it: systems are formulated, not found. The irony: the debate itself is a systems failure. The endless swing between "systems are out there" and "systems are constructed" is itself a system oscillation.
In VSM, this is a System 2 failure to damp oscillation between competing logics.
👐 My precising definition is a System 2 move: This is the essence of the Cynefin framework in phase shifts.
Hard systems thinkers worry that acknowledging observer dependence makes everything subjective.
Soft systems thinkers worry that acknowledging structure smuggles objectivism back in.
What's crucial is recognising that systems practice requires both:
  1. Observer-independence in structure and dynamics
    feedback loops, constraints, and causal relationships that persist regardless of observation (Ashby, Forrester, Wiener etc).
  2. Observer-dependence in framing and relevance
    boundaries, purposes, and what counts as "the system" are always brought forth by an observer in relation to intent (Checkland, HvF, Ulrich etc).
The synthesis isn't new either. See Gerald Midgley's boundary critique, Michael Jackson's CST and, more recently, Derek Cabrera's DSRP. Without holding both sides, I'm not sure there's a meaningful debate at all, e.g. appreciating single vs multiple causation, or structure vs interpretation.

The Dialectical Thought Form Framework (DTF)
Thinking dialectically for underpinning decisions: the source is limited in names and history, as it is a recent development. This is far beyond the personal comfort zone, but LLM usage is helpful: an LLM can treat the DTF as grammar and its usage as sentences. There is a lot of management and philosophical content accessible for meaningful knowledge.
The names to start with:
👁️ Otto Laske is a multidisciplinary consultant, coach, teacher, and scholar in the social sciences, focused on human development and organizational transformation. Jan De Visch is an organization psychologist, executive professor, and facilitator with extensive experience managing organizational development and change processes.
Key contributions: Dialectical Thought Form Framework (DTF) is aimed at understanding and nurturing reasoning complexity: how people structure thought as they handle context, change, contradiction, and transformation.
The four DTF types, one of them transactional. The counterpart of this page, 6x6systemslean (Shape design Zarf Jabes Jabsa), was checked for overlap and differences. The result of that is interesting:
It is not descriptive systems thinking (formal-logical); it is meta-structural systems thinking. This is the same territory Laske calls dialectical: DTF is operating in the same cognitive space.
Key indicators (DTF markers) are present throughout 6x6systemslean. The overlap is deep, but unevenly distributed across DTF categories.
The four DTF types, one of them transactional, moving in time. Important boundaries: there are also clear non-overlaps, which is healthy. DTF covers things these ideas do not aim to do, and 6x6systemslean has things DTF does not. DTF is diagnostic, 6x6systemslean is generative; they are complementary, not redundant.
The SIAR model operationalizes dialectical thinking at the system-design level, while DTF explicates the cognitive forms required to meaningfully operate such a model.
👐 This is an opening to connect what has developed into very soft thinking back to harder thinking, seeking the balance.

RN-2.1.2 Using DTF as one of the perspectives aside Zarf Jabes etc.
The Dialectical Thought Form Framework (DTF) summary
The Dialectical Thought Form Framework (DTF) consists of 4 categories (quadrants), each with 7 Thought Forms (TFs), for a total of 28. This is the standard IDM / Laske formulation; wording can vary slightly across publications and trainings, but the structure is stable. For the first two categories:
Context (C) 👐 Process (P)
C1 - Context as container 👐 P1 - Process as a whole
C2 - Contextual limits / boundaries 👐 P2 - Process phases
C3 - Contextual resources 👐 P3 - Process directionality
C4 - Contextual embeddedness 👐 P4 - Process rhythm / pace
C5 - Contextual dependency 👐 P5 - Process interaction
C6 - Contextual shift 👐 P6 - Process interruption
C7 - Contextual layering (multiple contexts) 👐 P7 - Process stabilization

For the other two categories:
Relationship (R) 👐 Transformation (T)
R1 - Relationship as mutual influence 👐 T1 - Emergence
R2 - Structural relationship 👐 T2 - Transformation of function
R3 - Functional relationship 👐 T3 - Transformation of structure
R4 - Power / asymmetry 👐 T4 - Breakdown / negation
R5 - Complementarity 👐 T5 - Reorganization
R6 - Tension / contradiction 👐 T6 - Developmental leap
R7 - Relational integration 👐 T7 - Integration at a higher level
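As a cross-check, the 4×7 structure can be held in a small data structure, convenient for simple sanity checks such as counting the thought forms. The dictionary below is only an illustrative sketch; the wording is abbreviated from the tables above.

```python
# The DTF quadrants as data: 4 categories × 7 thought forms = 28 in total.
# Wording abbreviated; only the structure matters here.
DTF = {
    "Context": ["container", "limits", "resources", "embeddedness",
                "dependency", "shift", "layering"],
    "Process": ["whole", "phases", "directionality", "rhythm",
                "interaction", "interruption", "stabilization"],
    "Relationship": ["mutual influence", "structural", "functional",
                     "power/asymmetry", "complementarity",
                     "tension/contradiction", "integration"],
    "Transformation": ["emergence", "of function", "of structure",
                       "breakdown/negation", "reorganization",
                       "developmental leap", "higher-level integration"],
}

total = sum(len(forms) for forms in DTF.values())
print(total)  # → 28
```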

Each class, Process (P), Context (C), Relationship (R) and Transformation (T), captures a way of thinking: from seeing events in relation to conditions, diagnosing interdependencies, and dealing with contradictions, to achieving integrative transformation. This is a generic thinking approach that is also usable on groups of persons and on systems acting in a similar way. That is a different boundary scope than the one DTF grew up in.
Using six categories to learn dialectical thinking.
The text below is derived from a course offering. Increasingly, the issues on which the survival of our civilization depends are 'wicked' in the sense of being more complex than logical thinking alone can make sense of and deal with. Needed is not only systemic and holistic but dialectical thinking to achieve critical realism. Dialectical thinking has a long tradition in both Western and Eastern philosophy but, although renewed through the Frankfurt School and more recently Roy Bhaskar, has not yet begun to penetrate cultural discourse in a practically effective way.
👉🏾 We can observe the absence of dialectical thinking in daily life as much as in the scientific and philosophical literature.
It is one of the benefits of the practicum to let participants viscerally experience that, and in what way, logical thinking, although a prerequisite of dialectical thinking, is potentially also the greatest hindrance to dialectical thinking because of its lack of a concept of negativity. To speak with Roy Bhaskar, dialectical thinking requires "thinking the coincidence of distinctions" that logical thinking is so good at making, being characterized by "fluidity around the hard core of absence" (that is, negativity, or what is missing or not yet there).
👉🏾 For thinkers unaware of the limitations of logical thinking, dialectical thinking is a many-faced beast, which to tame requires building up in oneself new modes of listening, analysis, and self- and other-reflection. These components are best apprehended and exercised in dialogue with members of a group led by a DTF-schooled mentor/facilitator.
There is a nice duality, a dichotomy, in this: the course design is offered as a linear path, while its content is about non-linearity.
The practicum takes the following six-prong approach:
  1. Foundations of Dialectic: Understand moments of dialectic and classes of thought forms and their intrinsic linkages as the underpinnings of a theory of knowledge.
  2. Structured dialogue and communication: Learn how to use moments of dialectic when trying to understand a speaker's subject matter and issues, or when aiming to speak or write clearly.
  3. (Developmental) listening and self-reflection: Learn to reflect on the thought-form structure of what is being said by a person or an entire group in real time.
  4. Text analysis: Learn to understand the conceptual structure of a text (incl. an interview text) in terms of moments of dialectic and their associated thought forms as indicators of optimal thought complexity.
  5. Question & problem generation and formulation: Learn how to formulate cogent and visionary questions (including to yourself), and give feedback based on moments of dialectic and their associated thought forms.
  6. Critical facilitation: Learn how to assist others in understanding what they are un-reflectedly saying, thinking, or intending.
Acquiring these six, mutually supportive capabilities takes time and patience with oneself and others. It goes far beyond 'skill training' since participants need to engage in revolutionizing their listening, way of thinking, structure of self-reflection, and attention to others' mental process, something that logical thinkers for whom the real world is "out there" (not "in here") are not accustomed to.
The four DTF types in a wheel
The four DTF types in a wheel, conforming to what was done at Zarf; see right side.

👁️ For a limited scope of dialectical-thinking usage: I am not doing diagnosis. The interest is in architectural, systemic, and comparative (Zarf, Jabes, Cynefin) artifacts: models, texts, strategies, frameworks; working at the level of architectural, systemic, and comparative sensemaking across frameworks and artifacts. That is DTF-informed systems design, second-order observation, not diagnosis.
💰 I use DTF concepts as an interpretive lens to examine the developmental affordances, constraints, and ceilings of architectural and systemic artifacts, and to compare frameworks across Zachman and Cynefin domains.
The Zarf Jabes approach triangulates: 🔏 DTF is one lens, not the authority. This avoids reductionism, overreach, and pseudo-psychometrics.
The repeating pattern: autonomous repetition of a limited number of distinctions
There is a feeling of a repeating pattern of ~6 distinctions per dimension. The question is why this happens; the generative LLM answer:
🔏 A dimension is only "adequately articulated" when it can express internal tension, differentiation, and integration without self-contradiction. To do that, you need at least:
  1. A grounding distinction
  2. Differentiation
  3. Relational mediation
  4. Tension / contradiction
  5. Negation or boundary
  6. Re-integration
That already gets you to 6. The 7th is not additive; it is closure at a higher level. This convergence shows up again and again.
Too few (≤4) 👐 Optimal (6-7) 👐 Too many (≥9)
"framework-ish" 👐 complete but navigable 👐 10+ encyclopedic
Oversimplified 👐 Expressive 👐 Redundant
No contradiction 👐 Tension + resolution 👐 Loss of salience
Binary thinking 👐 Dialectical movement 👐 Semantic inflation

The search for the optimal number of distinctions.
A simplified answer.
Zarf Jabes started from: And ended up with: 🔏 That is no coincidence; it is a sign of working against the same cognitive constraints that DTF formalizes. The "6-7 distinctions per dimension" rule is not a design choice but an empirically and dialectically grounded minimum required for stable, non-redundant articulation of complex meaning.

RN-2.1.3 Reframing the SIAR model using dialectical abstractions
Situation Input Actions Results, SIAR lean structured processing
The cycle, dialectically: Sense - Interpret - Act - Reflect
What is not done: replacing SIAR with DTF labels. Instead, think of this as SIAR with its cognitive mechanics exposed.
👁️ S, Sense: Situate the situation within its enabling and constraining contexts.
DTF language (dominant: Context + Relationship). Key dialectical move: "What contextual conditions make this situation what it is?"
This is not data gathering; it is situated sense-making.
👁️ I, Interpret: Structure meaning by relating elements, perspectives, and tensions.
DTF language (dominant: Relationship). Key dialectical move: "How do these elements mutually shape and constrain one another?"
Interpretation is relational structuring, not explanation.
👁️ A, Act: Intervene in ongoing processes to test and influence system behavior.
DTF language (dominant: Process). Key dialectical move: "Where and how can we intervene in the process as it unfolds?"
Action is processual engagement, not execution of a plan.
👁️ R, Reflect: Transform frames, assumptions, and structures based on what emerges. DTF language (dominant: Transformation). Key dialectical move: "What must change in how we frame the system for the next cycle?"
Reflection is structural reframing, not evaluation.

The cycle from grammar to sentence: Sense - Interpret - Act - Reflect
The implied cycle, one of the variations of a time dimension.
⚖️ Important: in practice the cycle situates contexts, structures relations, intervenes in processes, and transforms frames, whether or not this is made explicit.
SIAR Plain wording Dominant DTF move
Sense Situate the situation Contextualization (C)
Interpret Structure meaning Relational integration (R)
Act Intervene in process Process engagement (P)
Reflect Reframe the system Transformation (T)
The mapping of the reframed SIAR to DTF dimensions, see table.
🤔 The Transformation is dialectically different, but it is the Interpret step, "relational integration", that becomes the object when projected onto a 9-plane.

⚖️ The search is for content layouts to explain this. It should be a dialectical closure that fulfils the following requirements: minimal but complete in lower-bound articulation. 🔰 We sense a problem, execute an intervention, observe effects, and eventually reflect on what the system's real purpose is.
Sense Act Reflect
Context Problem Mandate Reframe
Process Signal Execute Learn
Outcome Effect Stabilize Purpose
What does change: this makes SIAR robust under complexity.
An alternative using other words but the same grammar.
Aim, plan, and execution are not dimensions but sentences spoken across the Context- Process- Outcome and Sense- Act- Reflect grammar, with execution necessarily occupying the center.
Sense Act Reflect
Context Aim Govern Adjust
Process Plan Execute Improve
Outcome Assess Achieve Purpose
These are other logical levels. Some alternatives would fail (important) because they would break dialectical closure. ⚠ The same words are used in different locations with a different intended meaning. This breaks the idea, the assumption, that a shared language would always help in solving misunderstandings.
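The 3×3 grammar can also be held as a small data structure: rows are the Context, Process and Outcome levels, columns are the Sense, Act and Reflect moves. A hypothetical sketch only; the cell wording follows the Aim/Plan/Assess table above.

```python
# The 3×3 grammar as data: rows are levels, columns are moves.
GRID = {
    "Context": {"Sense": "Aim",    "Act": "Govern",  "Reflect": "Adjust"},
    "Process": {"Sense": "Plan",   "Act": "Execute", "Reflect": "Improve"},
    "Outcome": {"Sense": "Assess", "Act": "Achieve", "Reflect": "Purpose"},
}

# Execution necessarily occupies the centre of the grid, as the text states.
print(GRID["Process"]["Act"])  # → Execute
```

Holding the grid as data makes the point about word placement concrete: the meaning of a cell comes from its row and column coordinates, not from the word alone.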
RN-2.1.4 Diagnosing dialectically the broken system in decision making
The agentic AI shift in the process of decisions
Why Human-Centric Design Breaks in Agentic Systems and What to Do Instead (LI: J.Lowgren 2025) 🤔 Most teams still design as if the human is always in charge. That worked when software was a tool in a human's hand. It breaks when software is an actor with its own perception, its own objectives, and the right to act. The result is familiar: a chatbot that sounds empathetic but never escalates; a logistics optimiser that saves fuel and blows delivery windows; a fraud detector that performs well at baseline and collapses during a surge.
🚧 None of that is a bug. It is design that started in the wrong place.
JLowgren_doublediamond.png
The Agentic Double Diamond begins with inversion: cognitive design from inside the agent's world. It continues with authopy: system design that encodes data, activation, and governance.
The goal of this: autonomy that is trusted and traceable. At the centre sit roles and cognition: the explicit boundary between what agents do and what people must decide.
🤔 Teams that work this way waste less time apologising for their systems. They spend more time improving them. That is the difference between software that merely runs and software that behaves. That is the difference between pace and regret.

Agentic Governance, a redirected book
The LI posts were dialectically richer than the book that resulted: Making Intelligence Trustworthy (Zeaware, J.Lowgren 2025. Note: the form is hidden when strict trace prevention is activated)
This is not a book about the present state of AI. It is about the threshold we have just crossed: the shift from automation to autonomy, from decision rules to decision flows, from governance as control to governance as coordination. The work ahead is not to restrain intelligence but to ensure it remains accountable as it learns, negotiates, and changes shape.
🤔 The paradigm unfolds through three companion volumes, each viewing the same transformation from a different altitude. Together they form the Agentic Trilogy: a framework for building, governing, and evolving intelligent systems that can explain themselves, adapt responsibly, and sustain human intent at machine speed. Key points:

The development of governance and leadership
The dialectical thinking framework mentions leadership, working on leaders. The question is how that relates to a governance strategy.
See the document: Human Developmental Processes as Key to Creating Impactful Leadership (ResearchGate, Graham Boyd, Otto Laske 2018). The analysis for holacracy:
In the Shared Leadership document, Laske is operating squarely inside the Constructive Developmental / DTF frame. Key characteristics of his position: when referencing holacracy, Laske is not endorsing it as a governance solution per se; he uses it as an example of role-based, non-hierarchical authority distribution that requires sufficient developmental capacity to function. In DTF terms, Laske's shared leadership presupposes that capacity. 💡 Holacracy is therefore treated as a container for shared leadership, not as an autonomous governance mechanism. (operates primarily T3 ➡ early T4)
Lowgren: polycracy and the agentic governance move are fundamentally different. 💡 This is why polycracy is the right word here, not shared leadership. Polycracy is about multiple centers of agency, not merely multiple leaders. Lowgren explicitly steps beyond role-sharing among humans. This is already post-holacratic (operates late T4 ➡ T7).
🎭✅ They match in an important alignment. Laske's shared leadership describes the developmental conditions under which leadership can be distributed among humans. Lowgren's polycratic governance describes how leadership itself migrates into socio-technical infrastructures, including agentic AI. The transition between the two is not organizational but developmental.
A more detailed analysis and connection to transformations remains to be refined.

feel order

RN-2.2 A new path in thinking - reflections

In this new era, reflection in thinking has become possible using AI based on grammar forming sentences and a lot of openly accessible sources. It is not about simple prompt questions, but about how to use what is out there, beyond the human capacity to process it in a sensible way.

RN-2.2.1 Understanding of options in the many confusing AI types
knowledge management, getting help by a machine
What is missing is AI literacy; the hype and buzz around AI cause more noise and confusion than better understanding. An attempt at a very simplified breakdown:
AI literacy Cognitive capacity
1 AI is a generic noun for new technology Used for all kinds of stuff machines can do in processes using technology.
2 LLM large language models are for text Using text/speech as communication; it is not about a better calculator or anything in Science, Technology, Engineering, and Mathematics (STEM) usage.
👉🏾 It is based on a lot of probabilistics in text, and there is a lot of good accessible text around.
3 ML machine learning (big data based) ML is very good at supervised learning, generating better aids for decisions. It is probabilistic, so there is a need to understand and manage the uncertainty in results. That is quite different from basic simple algorithms using formulas for the only possible correct outcome.
4 Dedicated bound-domain AI usage Dedicated domains are those from learning chess and go, which extended to STEM-domain usage in recognizing special patterns.
⚒️ ANPR cameras, reading text from scans, face recognition, fingerprint recognition, movement analysis in sport, etc. There is a sound theoretical model behind those patterns on which the analysis is built.
⚒️ Optically readable text (OCR) and automatic translation of text are not seen as AI anymore, but they are.
5 Defining dedicated domains Enabling overlap with product/technology From a sound theoretical model it is possible to start with better reasoning.
👉🏾 There is need for a well defined design theory. The missing part of design theory is where there is the gap now.
👉🏾 Training a LLM won't be very practical it will miss the boundaries and context for what is really needed. These must set by design in AI for that defined scope. This is bypassed by building up a dedicated boundary while working on the topic.
6 Ai generating the code for the design Having al well defined design for the minimal what is practical needed, the next challenge is the transformation into programming languages that are appropriate for the job.
⚒️ The last part is not really new. Would the language be Cobol than there products of the 90's trying to do that e.g. Coolgen. This is a signal we need to have a generic design/knowledge system to prevent a technology-lockin for generating code.
⚒️ The other point that it gives a signal for is that the resulting code should be based on understandable proven patterns but also having the options for extending into adjusted new patterns doing the job better. Also at this point there is need to prevent a technology-lockin. Nothing really new at this there was a time to standardize metadata to code generation using predefined standard patterns. The Common warehouse metamodel CWM an attempt to standardize dat to information processing OMG the institute for the CMW standard DMG the institute for well known data mining processes.
7 Transformational Re-framing the chosen solution, ongoing change will adopt some of this and while adding much more.

One additional important aspect in this is moving cyber-security and safety into these functional processing layers. This would address the ongoing issues of failing cyber-security: the activities are currently positioned where they cannot be solved structurally.
Common constraints when managing change
The iron triangle The Architecture of Illusion (LI: A.Dooley 2025)
Some things are worth repeating. The term 'Iron Triangle' was coined in 1956 in relation to the legislative process in the USA. It has nothing to do with project management.
The iron triangle - The triple constraint
1 Low regulations, special favors Functionality
2 Funding & political support Time
3 Electoral support Cost
4 Congressional support via lobby Scope
5 Friendly legislation & oversight Quality
6 Policy choices & execution Quantity
7 To add, it is missing: realisation by transformation

Only three are mentioned by Barnes, but there are at least three more, and there is transformation.
The other three are: functionality, quality, quantity.
This gives a total of 6 distinctions.
The Barnes Triangle (more recently the Triple Constraint) was created by Dr. Martin Barnes in 1979. It has everything to do with project management.
The purpose of the triple constraint is to start a conversation about finding a balance between the constraints that is acceptable to all parties. There is nothing about it that is cast in iron and inflexible.
Dooley triplet triangle
Constraints in the legislative process use named stakeholders, a different context from constraints in project management, which uses distinctions.

Summary of the DTF framework Zarf Jabes overlay comparison
Several aspects got reviewed by feeding my pages into an LLM and mentioning other sources. To my surprise, the LLM got far in reflecting on this kind of cognitive thinking.
👁️ Evaluating Zarf Jabes in DTF constructs
Zarf Jabes gives its meaning on the "Shape Systems Thinking: 6x6 Lean & Zachman Augmented Framework" page. The idea is that to manage complexity, one must see multiple interdependent dimensions, not just a single linear process. That is not descriptive systems thinking (formal-logical); it is meta-structural systems thinking, the same territory Laske calls dialectical. Key indicators (DTF markers) are present throughout. This places it beyond Context-only (C) and Relationship-only (R) thinking. It consistently combines: SIAR = Sense ➡ Interpret ➡ Act ➡ Reflect. This is where the overlap becomes very concrete. SIAR is not just a learning cycle, not only PDCA with different labels, and not merely process optimization.
➡️ Cognitively, SIAR is a recursive meaning-construction loop.
👁️ Testing for hitting upper and lower bounds
There is a good balance; neither of the bounds is passed.
➡️ Lower bound (under-articulation), a dimension hits the lower bound when: Symptom: "I kind of get it, but I don't know what to do."
➡️ Upper bound (over-articulation), a dimension hits the upper bound when: Symptom: "This is rich, but I'm lost."
👁️ Aligning the used vocabularies in change
In Laske's sense, Transformation (T) is not "change over time", that's Process (P).
T-forms enable: Key T-moves relevant to the Jabes Zarf framework: Keep those three in mind, they recur everywhere.
This is very promising to carefully investigate what is more possible.

RN-2.2.2 Asking not only for results (appeasing) but also for the reasoning
DTF Alignment to 6x6 Zarf, the AI reasoning
Lean cycles like PDCA/SIAR are about iterative improvement based on experience and evidence, which resonates with Process (P) and Transformation (T). The page stresses duality and dichotomy (e.g., engineering vs system change, frontend vs backend). In DTF, key indicators (DTF markers) are present. This already places the page beyond Context-only (C) and Relationship-only (R) thinking.
Consistently combines: Dominant mapping of the 4 categories to the 6*6 reference.
What How Where Who When Which
Scope / Vision C C C R P C
Conceptual (meaning structures) R R C R P C
Logical (coherence & consistency) R P R R P R
Physical (realization) R P R R P P
Operational (running system) P P R R P P
Reflective / Lean / Learning T T T T T T

Dominant mapping of the 4 categories to the 6*6 reference.

If you step back, a vertical gradient appears:
  1. Top rows ➡ Context & Relationship
  2. Middle rows ➡ Relationship & Process
  3. Lower rows ➡ Process
  4. Bottom row ➡ Transformation
This is exactly the developmental movement Laske describes:
Where Transformation is structurally required (non-optional)
Three places cannot be worked on without T-forms. This explains why many people understand the grid but cannot use it effectively.
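The dominant-mapping table above can also be held as data; a minimal sketch (row labels shortened, and the `dominant` helper is an illustrative assumption) that reads off the vertical gradient per row:

```python
# Sketch: the 6x6 dominant mapping (C/R/P/T per cell) as data,
# used to read off the dominant DTF category per row (the vertical gradient).
from collections import Counter

COLUMNS = ["What", "How", "Where", "Who", "When", "Which"]

GRID = {  # row label -> dominant DTF class per column (C/R/P/T)
    "Scope / Vision": ["C", "C", "C", "R", "P", "C"],
    "Conceptual":     ["R", "R", "C", "R", "P", "C"],
    "Logical":        ["R", "P", "R", "R", "P", "R"],
    "Physical":       ["R", "P", "R", "R", "P", "P"],
    "Operational":    ["P", "P", "R", "R", "P", "P"],
    "Reflective":     ["T", "T", "T", "T", "T", "T"],
}

def dominant(row):
    """Most frequent DTF class in a row."""
    return Counter(GRID[row]).most_common(1)[0][0]

for row in GRID:
    print(f"{row:14} -> {dominant(row)}")
```

Running it shows the gradient from Context at the top, through Relationship and Process, down to Transformation in the bottom row.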

Testing Jabes-Zarf for completeness across all DTFs
All 28 TFs are present: no gaps, no redundancies. That is not common.
👁️ In DTF (Laske), the 28 TFs are, structurally: Examples (schematic, not full list): A framework "covers" a TF only if it forces the thinker to perform that operation. Naming something is not invoking a TF.
👁️ When people map rich frameworks (Zachman, VSM, Cynefin, SAFe, etc.) to DTF, the pattern is almost always:
⚠️ Typical pattern A - Gaps: results in rigid systems. ⚠️ Typical pattern B - Redundancies: creates conceptual noise. ⚠️ Typical pattern C - Skew: explains why people "can't use" the framework.
Most frameworks are built from one of three starting points:
  1. Managerial practice overweights Process & Relationship
  2. Philosophical theory overweights Context & Transformation
  3. Technical architecture overweights Process
Your framework did not start in one place. It was built by iterative reconciliation of contradictions: That dialectical construction is exactly what DTF measures.
👁️ There are inherent failure modes by misunderstanding, wrong usage, structural failure points without DTF T-forms.
⚠️ Failure mode A: Grid treated as static classification
What happens: diagnosis: ⚠️ Failure mode B: Local optimization destroys system coherence
What happens: diagnosis: ⚠️ Failure mode C: Fractal scaling collapses
What happens: diagnosis: These are T-failures, not design flaws.

RN-2.2.3 Asking for the reasoning in adjusted 3*3 frames
Situation Input Actions Results, SIAR lean structured processing
The dialectical cycle: Sense - Interpret - Act - Reflect
The SIAR model got reframed into dialectical sentences.
I used variations of the 3*3 frame. The reframe into dialectical sentences is another perspective on the simplified projection into four levels for roles/tasks. The reasoning of the LLM is the most interesting part.
Sense Act Reflect
Context Problem Mandate Reframe
Process Signal Execute Learn
Outcome Effect Stabilize Purpose

This grid is not a taxonomy, it is a sentence generator.
Population of the 9 cells, this is the key step, naming each cell by its function, not by a label fetish.
👁️ Row 1: Context Problem ➡ Mandate ➡ Reframe
"Why is action allowed at all?"
👁️ Row 2: Process Signal ➡ Execute ➡ Learn
"Who does what, and answers for it?"
👁️ Row 3: Outcome Effect ➡ Stabilize ➡ Purpose
"What does this produce in the world?"
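The "sentence generator" reading of the grid can be sketched concretely; a minimal illustration where the cell names follow the grid above and the sentence template itself is an assumption:

```python
# Sketch: the 3x3 grid as a sentence generator, not a taxonomy.
# Row, column, and cell names come from the grid; the template is illustrative.
ROWS = ["Context", "Process", "Outcome"]
COLS = ["Sense", "Act", "Reflect"]

CELLS = {
    ("Context", "Sense"): "Problem", ("Context", "Act"): "Mandate",   ("Context", "Reflect"): "Reframe",
    ("Process", "Sense"): "Signal",  ("Process", "Act"): "Execute",   ("Process", "Reflect"): "Learn",
    ("Outcome", "Sense"): "Effect",  ("Outcome", "Act"): "Stabilize", ("Outcome", "Reflect"): "Purpose",
}

def sentence(row, col):
    """Generate one sense-making sentence for a cell (template is an assumption)."""
    return f"At the {row} level, to {col.lower()} is to work on the {CELLS[(row, col)]}."

for r in ROWS:
    print(" -> ".join(CELLS[(r, c)] for c in COLS))

print(sentence("Outcome", "Reflect"))
```

Each row prints as the three-step move the text describes; any cell can be expanded into a sentence by its function.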
➡️ Why problem is upper-left: ➡️ Why purpose (POSIWID) is lower-right: "The purpose of a system is what it does.", not: what it intends to do, what it claims to do, what it is designed to do …but what it reliably produces over time.
That single sentence already implies time, outcomes, and reflection.

Explanation of "breaks Beer's insight"
If you place purpose at the start, at the top, or as an input, you are no longer describing POSIWID.
What happens if purpose is not lower-right
  1. Purpose in Context * Sense
    This turns purpose into: "What we think we're here for."
    that is mission, losing: feedback, accountability, learning
  2. Purpose in Process * Act
    This turns purpose into: "What we are trying to achieve while acting."
    That is strategy-in-use, losing: unintended consequences, structural bias, emergent effects
  3. Purpose in Outcome * Act
    This turns purpose into: "What we decide to count as success."
    That is KPI-driven control, losing: ethical reflection, systemic side effects, long-term viability
Beer's condition Why it breaks Beer's insight
Observed behavior purpose becomes a cause, not an effect
Persistence over time learning becomes self-confirming
Independence from stated intent the system becomes blind to itself

💣 Intention, mission, aspiration, ideology, those may matter, but they are not Beer's purpose.
Breaking the conditions in Beer's terms, the system becomes unviable, because it cannot correct its own misalignment. POSIWID requires purpose to be inferred retrospectively from outcomes, not asserted prospectively as intent.
Tensions in the diagonals: learning emergence vs normative intentional
In the reasoning a diagonal got mentioned.
The hidden important diagonal, Upper-left ➡ lower-right, notice the diagonal tension: That diagonal is the learning journey. If a framework does not allow this diagonal to be traversed, it cannot learn.
🤔 In a 3*3 reference with Context- Process- Outcome and Sense- Act- Reflect, the problem appears as a contextual negation in sensing, while purpose (POSIWID) emerges only as an integrated reflection on stabilized outcomes.
Interestingly, the LLM halted here with no clue about the other diagonal; an explicit question was needed to continue.
The other diagonal: lower-left ➡ upper-right. That diagonal is: Effect ➡ Execute ➡ Reframe, or abstracted: Consequences ➡ Action ➡ Values.
This is not learning-first; it is normative and intentional.
  1. Lower-left: Outcome * Sense ➡ "What do we want?"
    This is: anticipation, aspiration, imagined effects, intention,
    not "what is", but "what ought to be"
  2. Center: Execute, Same center, no surprise. Both diagonals must pass through execution.
    Execution is where: intention meets reality, values get tested, consequences are produced
  3. Upper-right: Context * Reflect ➡ "What matters?"
    This is: values, principles, norms, legitimacy, ethics (broadly, not morally) to decide for:
    "What kind of system do we want to be?"
So what is this diagonal called? If the first diagonal is the learning / POSIWID diagonal, this second one is the normative / intentional diagonal. In plain words: they must intersect, but must not collapse into each other. You sensed correctly that Context-Reflect and Outcome-Sense don't feel like "facts".
That's because they aren't. A parallel 3*3, using normative language, often helps.
Alternative wording (same structure, different semantics)
👁️ Row 1: Context Values ➡ Principles ➡ Legitimacy
"Why is action allowed at all?"
👁️ Row 2: Process Options ➡ Execute ➡ Responsibility
"Who does what, and answers for it?"
👁️ Row 3: Outcome Intent ➡ Deliver ➡ Meaning.
"What does this produce in the world?"
🔏 This is not linear planning; it is accountable emergence. Why it so often fails: collapsed time destroys learning. Each row expresses a temporal shift from anticipation through action to retrospective judgment, preventing values, responsibility, and meaning from being declared before they are earned.
Before action During action After action
Imagine Commit Justify
Context Values Principles Legitimacy
Process Options Execute Responsibility
Outcome Intent Deliver Meaning

A hidden tension: there are many ways time acts on the system:
  1. left to right, for each row
  2. the cycle, e.g. of a product/service
  3. the state of the system

🔏 Why both diagonals are needed (critical): if you use only one, closure fails. Dialectical closure requires both diagonals to be visible and in tension; that tension is healthy.
The upper-left to lower-right diagonal explains how meaning emerges from action, while the lower-left to upper-right diagonal explains how values and intentions seek expression through action.
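Both diagonals can be read directly off the 3*3 grid; a minimal sketch, with the cell names taken from the grid above:

```python
# Sketch: extracting the two diagonals of the 3x3 grid.
# Upper-left -> lower-right: the learning / POSIWID diagonal.
# Lower-left -> upper-right: the normative / intentional diagonal.
GRID = [
    ["Problem", "Mandate",   "Reframe"],   # Context row
    ["Signal",  "Execute",   "Learn"],     # Process row
    ["Effect",  "Stabilize", "Purpose"],   # Outcome row
]

learning  = [GRID[i][i] for i in range(3)]      # Problem -> Execute -> Purpose
normative = [GRID[2 - i][i] for i in range(3)]  # Effect  -> Execute -> Reframe

# Both diagonals must pass through execution: same center, no surprise.
assert learning[1] == normative[1] == "Execute"

print("learning :", " -> ".join(learning))
print("normative:", " -> ".join(normative))
```

The shared center cell is where intention meets reality, exactly as the text argues.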

X-matrix allaboutlean
RN-2.2.4 The challenge: "From Tension to Direction"
This is not a control framework; it is a time-respecting grammar for collective action. That is why it scales fractally, and why it feels unfamiliar to command-and-control thinkers. A jump out of the box.
🎯 The Two-Diagonal Facilitation Move, "From Tension to Direction", is a facilitation move you can use with teams, communities, or policy groups. It works on a whiteboard, Miro, or paper; no theory explanation required. Use it when: Goal: we'll look at what's actually happening, followed by what should matter, then adjust what we do so the two line up. Time: 15-30 minutes. Group size: 3-12.
Sense Act Justify
Context Problem 🕳 Values
Process 🕳 Execute 🕳
Outcome Intent 🕳 Purpose
Imagine Commit Reflect

⚖️ Step 1 - Draw only this (don't explain yet).
Intentionally do not fill the other cells.
Note: this is the X-matrix in disguise, made universally usable.

⚖️ Step 2 - Diagonal 1: What is actually happening? (Learning)
Ask only these three questions, in order:
  1. Problem (Context * Sense): What is not working right now, for real?
  2. Execute (Process * Act): What are we actually doing about it, not planning, not intending?
  3. Purpose (Outcome * Reflect): Given our repeated actions, what does this system seem to be for?
Write short phrases. No debate yet. This diagonal reveals reality.
⚖️ Step 3 - Diagonal 2: What should matter? (Normative)
Now ask the other three. Write short phrases. No debate yet. This diagonal reveals aspiration.
⚖️ Step 4 - The critical move: compare the diagonals. Now point to the center Execute and ask:
There are only three possible answers. This moment creates dialectical closure, no voting needed.
⚖️ Step 5 - One legitimate next step
Ask only one final question: Write one action. (That's it.)
🎯 That limited set of actions should be defined in a stated-problem structure, so it is possible to derive proposals in a requirements structure.
By this, the shared suggestions, ideas, and knowledge get structured. Why this works (without explaining theory): Failure modes (useful signals)

RN-2.3 Purposeful usage of dialectical thoughts

This different path of information processing, supporting governance and informed, understandable decisions, requires a more detailed scope and boundaries to make it practical. The four areas for better understanding hold:

RN-2.3.1 Underpinning nominal limit in distinctions at a dimension
Formal justification of why an "8th" becomes redundant
This question was bothering me after seeing the repeating pattern. No reference for it was found (search engine). An LLM was used to get it explained understandably.
The closure condition at 6- 7 (key step) explained. Across dialectical systems, closure occurs when all logical moments of determination are present.
⚖️ These moments are invariant (names vary):
  1. Grounding - what exists / is given
  2. Differentiation - distinction from others
  3. Relation - mutual influence
  4. Opposition - tension / conflict
  5. Boundary / Negation - limits, breakdown
  6. Reorganization - new pattern
  7. Integration - higher-order coherence
Once all 7 are present, any new distinction must either repeat one of these operations or combine two already present; that is closure. This is the same notion of closure used in:
Why an 8th distinction cannot be irreducible (the proof)
Assume an 8th distinction D₈ is irreducible. Then D₈ must perform a cognitive operation not covered by the 7 above. There are only three possibilities:
👁️ Therefore Any proposed 8th distinction is either a recombination, specialization, or rhetorical elaboration of existing ones. QED.
Once grounding, differentiation, relation, opposition, boundary, reorganization, and integration are present, the system of distinctions is closed; any further distinction must be a recombination or contextual specialization, and is therefore redundant at the structural level.
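The shape of this closure argument can be illustrated with a toy model. The feature sets below are loose illustrative assumptions, not Laske's definitions; they only show the reduction step: a candidate 8th distinction either repeats a moment or is covered by combining two.

```python
# Toy sketch of the closure argument: the 7 moments as named operations,
# and a check of whether a candidate "8th" adds anything beyond repeats
# or pairwise combinations. Feature sets are illustrative assumptions.
from itertools import combinations

MOMENTS = {
    "grounding":       {"given"},
    "differentiation": {"distinct"},
    "relation":        {"mutual"},
    "opposition":      {"mutual", "tension"},
    "boundary":        {"distinct", "limit"},
    "reorganization":  {"limit", "pattern"},
    "integration":     {"mutual", "pattern"},
}

def reducible(candidate: set) -> bool:
    """True if the candidate repeats one moment or is covered by combining two."""
    if candidate in MOMENTS.values():          # repetition
        return True
    return any(a | b >= candidate              # recombination of two moments
               for a, b in combinations(MOMENTS.values(), 2))

# A proposed 8th distinction, e.g. "tension at a limit":
print(reducible({"tension", "limit"}))  # covered by opposition plus boundary
```

In this toy model, any candidate built from already-present aspects comes back as reducible; only a genuinely new aspect would escape, which is exactly what the closure claim denies.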

The comparative justification for why ~6-7 distinctions
The reasoning for a limited number of distinctions in comparative convergence:
VSM breakdown | Cynefin domains | Zachman ⇄ | Zachman ⇅
System 1: Operations | Clear: sense-categorize-respond | 1 What (data) | Context (Scope)
System 2: Coordination / damping | Complicated: sense-analyze-respond | 2 How (function) | Concept (Business)
System 3: Internal regulation | Complex: probe-sense-respond | 3 Where (network) | Logic (System)
System 3*: Audit / reality check | Chaotic: act-sense-respond | 4 Who (people) | Technology
System 4: Intelligence / future | Confused: not knowing which domain | 5 When (time) | Detailed (components)
System 5: Identity / policy | Disorder: transitional ambiguity | 6 Which (motivation) | Functioning
Environment: External complexity | Aporetic boundary: collapse / phase shift | 7 (Implicit iteration) | (Implicit iteration)

👁️ Across organizational cybernetics (VSM), sense-making (Cynefin), enterprise architecture (Zachman), and cognitive dialectics (DTF), systems converge on roughly six to seven irreducible distinctions per dimension because that is the minimum articulation required for stable, non-redundant understanding and control of complexity.
textual references in this: 📚 The statement: "Each dimension, when articulated adequately but minimally, needs about 6-7 stable distinctions." does not originate as a design rule in Laske. It is a convergence result across several intellectual traditions that Laske draws together.
Hegel (dialectic constraints) Piaget (epistemic operators) Jaques (Stratum - Cognitive)
Immediate ⇅ Undifferentiated unity Reversibility ⇅ Undoing Declarative ⇅ Facts
Negation ⇅ Differentiation Conservation ⇅ Invariance Procedural ⇅ Processes
Mediation ⇅ Relation Compensation ⇅ Balance Serial ⇅ Sequences
Opposition ⇅ Tension Composition ⇅ Combining Parallel ⇅ Systems
Contradiction ⇅ Instability Negation ⇅ Differentiation Meta-systemic ⇅ Systems of systems
Sublation ⇅ Reorganization Reciprocity ⇅ Mutuality Dialectical ⇅ Contradiction
Totality ⇅ Integration Transformational ⇅ Re-framing identity

Key sources:
  1. Hegelian dialectics (structure of determination)
    Hegel published his first great work, the Phänomenologie des Geistes (1807; The Phenomenology of Mind). This, perhaps the most brilliant and difficult of Hegel's books, describes how the human mind has risen from mere consciousness, through self-consciousness, reason, spirit, and religion, to absolute knowledge.
  2. Piaget / Kegan (constructive-developmental limits)
    Developmental psychology shows that: Piaget's theory of cognitive development, or his genetic epistemology, is a comprehensive theory about the nature and development of human intelligence.
    Kegan described meaning-making as a lifelong activity that begins in early infancy and can evolve in complexity through a series of "evolutionary truces" (or "evolutionary balances") that establish a balance between self and other (in psychological terms), or subject and object (in philosophical terms), or organism and environment (in biological terms). This is not Miller's "7±2" memory claim; it is about structural differentiation, not memory load.
  3. Jaques' stratified systems theory
    Elliott Jaques incorporated the findings of his "Glacier investigations" into what was first known as the Stratified Systems Theory of requisite organization. This major discovery served as a link between social theory and the theory of organizations (strata).
  4. Empirical validation in DTF research
😲 The 7-per-quadrant pattern is empirical, not aesthetic.

Historical source for limited number of distinctions
Another direction on why there is this limitation on the number of distinctions.
The ask was not a citation chain but a structural genealogy: how the same necessity for articulated distinctions reappears as theories of mind mature; to trace it explicitly and conservatively, showing what is inherited, what is transformed, and why the 6-7 pattern keeps re-emerging.
Core move: Hegel does not enumerate categories arbitrarily.
He shows that thinking generates distinctions until contradiction stabilizes. Key structure (Logic): Hegel's dialectic unfolds through triadic movement, but stability requires more than three moments. Across Being ➡ Essence ➡ Concept we see (see the table) 7 functional moments, though Hegel never lists them as such.
👉🏾 Crucial point: Hegel discovers that dialectic stabilizes when all necessary moments are present.
Piaget takes Hegel out of metaphysics and into empirical cognition. Explicit inheritance from Hegel: dialectic becomes equilibration, contradiction becomes cognitive disequilibrium, sublation becomes re-equilibration. Piaget's key shift is into formal operational thinking. 👉🏾 Piaget repeatedly finds the same constraint, now empirically grounded.
Jaques applies Piagetian operations to work, time, and organizations. His contribution: he discovers that roles require specific levels of cognitive integration. The critical move: Jaques ties cognitive differentiation to work, time, and organization. 👉🏾 Jaques never formalizes "7" as a rule, but dialectical capacity becomes operational necessity.
Laske makes the latent structure explicit. His synthesis integrates Hegelian dialectic, Piagetian operations, Jaques' strata, and adult-development research. He created a DTF structure of four classes, each with 7 thought forms:
Class Function
1 Context Framing
2 Process Change
3 Relationship Interaction
4 Transformation Re-organization
The four classes each serve a different function.
👉🏾 Why 7 thought forms? Because Laske empirically finds that:
Dialectical completeness becomes necessity
These are Hegelian moments, operationalized. The through-line (compressed):
Thinker Contribution What stays invariant
Hegel Dialectic of concepts Necessary moments
Piaget Dialectic of cognition Operational closure
Jaques Dialectic of work Functional sufficiency
Laske Dialectic of thought Explicit minimal set
What persists is not the number but the necessity of a bounded set, 6-7 appears because:
👉🏾 That is the smallest number of distinctions that allows contradiction, mediation, and integration without collapse or redundancy.
Laske is the first to state the constraint explicitly

From Hegel's Logic through Piaget's operations, Jaques' strata, and Laske's DTFs, the recurrence of approximately six to seven distinctions per dimension reflects a deep structural constraint of dialectical cognition. 👁️💡 A good explanation, but there is no verification by others; as stated, it is mentioned nowhere.

RN-2.3.2 Thinking dialectical on how to define "the problem"
Starting with understanding "the problem"
There is an old, never-mentioned gap: when a need for change is felt, it is a problem to state the problem of why that need for change is felt. "So you want to define "the problem"" (LI: John Cutler 2025). The full page is at: The Beautiful Mess, TBM 396.
🕳️ In product, we're told to "define the problem."
I've always felt that this is hubris, at least with anything beyond fairly contained situations. "Go talk to customers, and figure out what the problem is!" Ultimately, as product builders or interveners, we may choose to take a shot at "solving the problem" with the tools at our disposal. So I guess my intent with this is to get people thinking at multiple levels.
👉🏾 This is not a root cause model.
This is in line with dialectical thinking: the problem definition is sensing what the intention is, context (C), with the goal of being able to act on processes (P) by using relationship (R) thought forms.

Distinctive capabilities in problem understanding
This can be made part of "The Two-Diagonal Facilitation Move: From Tension to Direction".
"Define the problem" is often hubris in complex situations and there is no single privileged problem definition. The goal should be to act more thoughtfully by looking at the situation from multiple angles.
Customer's mental model/ stated problem
Start with how the customer describes the problem in their own words and suspend judgment
👉🏾 It is their mental model of the problem. This is their story, not ours, no matter how strange it might sound, or how strongly we might feel they are wrong or missing the point.
👉🏾 Even if the framing is misguided, it is still the belief system and narrative currently organizing their understanding of the situation.
👉🏾 If anything is going to change, it is this story and its explanatory power that will ultimately need to be replaced by something more compelling.
Human factors and behavioral dynamics
Examine the system forces shaping behavior, including incentives, norms, tools, power, and constraints. This shifts the focus to the environment and the forces acting on people within it.
We intentionally look at the system through multiple lenses, including:
  1. human factors, learning design, behavioral psychology,
  2. anthropology, politics, social practice theory, and
  3. power.
The aim is not to find a single cause, but to understand how the system shapes what feels normal, risky, effortful, possible, etc.
Ecosystem view: other actors' perspective
Look at how other people around them experience the same situation. Here we explicitly acknowledge that how one person sees or feels the problem is just one take on the situation. People often inflict their framing of the problem onto others, intentionally or not.
Restated Problem with status quo attempts
Integrate perspectives with history and prior attempts and treat past fixes as useful data.
This is where we start integrating. We take the actors from Layers 1 and 2 and the forces identified in Layer 3, and we add history. We begin restating the problem through this richer lens, knowing full well that we are now converging and imposing a perspective, whether it turns out to be right or wrong.
Feasible influence & needed capabilities
Back to reality, informed by everything we have learned so far. Our understanding of what is possible is shaped by the stories we heard, the perspectives surfaced, the system forces examined, and the history uncovered. (layer 1-4)
This is where we move from understanding to action. Here we form concrete, feasible actions for how we might intervene in the situation. We ask and decide what:
  1. we can try, not in theory, but in practice.
  2. can we realistically influence today?
  3. small actions are feasible?
  4. capabilities that are qualitatively missing or quantitatively not sufficient
  5. capabilities we need to borrow, buy, or build to support those interventions?
  6. levers are actually within reach?
These choices cannot be made in isolation. They must cohere with prior efforts, align with the incentives and constraints already at play, fit the needs and beliefs of the actors involved, and still connect back to the problem as it was originally described, even if that description now feels distant from where we believe the strongest leverage exists.
Enabling overlap with product/technology
Consider how your product or expertise could realistically influence these dynamics without selling.
We consider our product, expertise, or technology, and how it might influence the situation. The issue is one of opportunity: can we reduce friction or create new pathways? This is hypothesis-building, not pitching.
The aim is better judgment and leverage, not a perfect explanation.
Defining an index reference for the problem-state
"The problem" is very generic; with this we have a starting point at any level, once a start is made by stating "a problem". A "DTF-safe" scoring vocabulary for ZARF, using the problem state from Cutler, is:
Key identity Key thoughts Involved thoughts for information review
?-PTF-1 Customer's mental model/ stated problem What problem does the customer say they have, in their own words?
?-PTF-2 Human factors and behavioral dynamics What frictions, incentives, norms, habits, or power dynamics are blocking or reinforcing current behaviors?
?-PTF-3 Ecosystem view / other actors' perspective How do other actors in the customer's environment interpret or feel the impact of this problem?
?-PTF-4 Restated problem with status-quo attempts When we integrate these views and factors, what is the "real problem", and why have existing fixes or workarounds failed?
?-PTF-5 Feasible influence & needed capabilities What can we realistically influence today, and what additional capabilities would be needed to expand that influence?
?-PTF-6 Enabling overlap with product/technology How does our product, expertise, or technology directly address these dynamics and create better conditions?
?-PTF-7 Transformational realising solutions Re-framing the chosen solution
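Since the key identities are meant as enablers in an information system, the index can be carried as a small structure; a minimal sketch, where the record layout and the treatment of the leading "?" as a context prefix are assumptions:

```python
# Sketch: the ?-PTF problem-state index as a reusable, fractal reference.
# The dataclass layout is an assumption; keys and labels follow the table.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProblemThoughtForm:
    key: str       # e.g. "?-PTF-1"; "?" assumed to stand for a context prefix
    label: str
    question: str

PTF = [
    ProblemThoughtForm("?-PTF-1", "Customer's mental model / stated problem",
                       "What problem does the customer say they have, in their own words?"),
    ProblemThoughtForm("?-PTF-2", "Human factors and behavioral dynamics",
                       "What frictions, incentives, norms, habits, or power dynamics are at play?"),
    ProblemThoughtForm("?-PTF-3", "Ecosystem view / other actors' perspective",
                       "How do other actors interpret or feel the impact of this problem?"),
    ProblemThoughtForm("?-PTF-4", "Restated problem with status-quo attempts",
                       "What is the 'real problem', and why have existing fixes failed?"),
    ProblemThoughtForm("?-PTF-5", "Feasible influence & needed capabilities",
                       "What can we realistically influence, and what capabilities are needed?"),
    ProblemThoughtForm("?-PTF-6", "Enabling overlap with product/technology",
                       "How does our product or expertise address these dynamics?"),
    ProblemThoughtForm("?-PTF-7", "Transformational: realising solutions",
                       "How is the chosen solution re-framed?"),
]

def scoped(prefix: str):
    """Bind the generic index to a concrete context, e.g. prefix 'A'."""
    return [p.key.replace("?", prefix) for p in PTF]

print(scoped("A"))
```

Because the keys are generic, the same seven-entry index can be instantiated at any fractal level by choosing a different prefix.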

👁️💡 The pattern is usable as a fractal at any level and in any type of context.
Minor adjustments were made to Cutler's text. Two sub-fractals, each with 6 distinctions, are made better visible. The key identities are enablers for support in an information system.
The transformational step initiates the connected stage of extracting and defining suggestions that enable requirements. This is a closure in line with eDIKWv.
RN-2.3.3 The role of certainty in systems, TOC: first order
Anti buzz/hype: data understanding limitations
Just asking the LLM to review this: why-data-cannot-be-understood-scientifically (Malcolm Chisholm Oct 16 2025) The text is about how we see "data". Key points:
  1. Data is often assumed to be "scientific"
    • Common belief: because something is labelled "data-driven" it must somehow be aligned with the rigour of the scientific method (hypotheses, measurement, predictable behaviour).
    • In this view, data is treated like a class of things whose individual elements behave according to general laws. (e.g., "all ticks suck blood, so if I see one I know it will do so") 
    • Assumption: experts know how to treat "data" properly, since it is scientific.
  2. But in practice, data often resists that kind of scientific understanding
    • A practical example: a financial-instruments database where each record had an identifier of eight digits. The first three digits appeared random; the remaining five were sequential.
    • It was discovered (by talking with "old timers") that originally the identifier was purely sequential, but at one point someone changed the first three digits to a "random" prefix because the storage system had performance issues (all new records were getting physically crowded on a hard drive); that change remained.
    • The author reflects: the original reason (hard-drive head wear) is obsolete now, yet the "quirk" remains in the data schema. Data artifacts persist.
  3. Why this matters
    • Because data is often inherited through migrations, evolutions of systems, and forgotten design choices, the "why" behind particular patterns or structures may be lost. 
    • Result: we cannot simply "inspect" current data and assume it behaves according to some neat scientific laws. Features may be historical, accidental, ad-hoc fix, legacy artefacts. 
    • Argument: this undermines the idea that data can always be treated via a purely scientific approach, because the context, history, and idiosyncrasies matter. 
    • The warning on consequences: slower adaptability, additional effort, "sclerosis" in organizations that rely on old data but cannot fully reinterpret or clean it.
  4. Take-away
    • The modern prejudice that everything must be understood scientifically (i.e., via general laws, predictable behaviour, standardised models) doesn't always apply to data.
    • Practically: data management must account for history, context, design decisions, migrations, and legacy systems, not just treat data as "scientific stuff" that behaves uniformly.
    • The author implies that acknowledging this gap is important for realistic data strategies.
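The eight-digit identifier anecdote can be made concrete in code. A minimal sketch, assuming the layout described in point 2 above; the `split_legacy_id` helper is hypothetical, and the three-digit "random" prefix plus five-digit sequence come from the anecdote, not from any real schema:

```python
import re

def split_legacy_id(identifier: str) -> dict:
    """Split an 8-digit instrument id into its historical parts.

    Hypothetical layout from the anecdote: the first three digits are
    a 'random' prefix once added for disk-performance reasons, the
    last five digits are the original sequential number.
    """
    m = re.fullmatch(r"(\d{3})(\d{5})", identifier)
    if not m:
        raise ValueError("expected exactly 8 digits")
    prefix, sequence = m.groups()
    return {"prefix": prefix, "sequence": int(sequence)}
```

The point of the sketch is exactly the article's point: nothing in the current data says *why* the prefix exists; that knowledge lives only with the "old timers".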

Certainty uncertainty in the theory of constraints
The theory of constraints (TOC) focuses on a single issue that is holding up the system. This classic TOC thinking assumes a predictable system that behaves like a simple pendulum:
First-order pendulum characteristics:
The system has one dominant degree of freedom | Focus on the constraint.
Variability is treated as noise around a stable center | Act decisively on the best current model.
The observer is outside the system | Learn from system feedback.

Even when they acknowledge learning and adaptation, the structure of causality remains linear: This is a single-loop learning architecture. The pendulum swings, but the pivot point is fixed.
👉🏾 The problem lives in the uncertain world, the task is to act despite it.
The reality of a complex system is far more unpredictable, like a double pendulum under high stress.
Decisions in a simple order: what, how, where, who, and when; the last one is more interesting ... which!
The Logical Thinking Process: A Systems Approach to Complex Problem Solving, a review by Chet Richards (2007); TOC and what is in a LI post.
The thinking processes in Eliyahu M. Goldratt's theory of constraints are the five methods to enable the focused improvement of any cognitive system (especially business systems). ... Some observers note that these processes are not fundamentally very different from some other management change models such as PDCA "plan-do-check-act" (aka "plan-do-study-act") or "survey-assess-decide-implement-evaluate", but the way they can be used is clearer and more straightforward.
A review of the work of Dettmer. Dettmer begins the chapter by sketching the basic principles of human behavior, but there's a limit to what he can do in a couple of dozen pages or so. People do get Ph.D.s in this subject. So regard it as more of a brief survey of the field for those lab rats from the engineering school who skipped the Psych electives.
Then he does a very unusual thing for a technical text. He introduces John Boyd's "Principles of the Blitzkrieg" (POB) as a way to get competence and full commitment, "even if you're not there to guide or direct them" (p. 8-11). Which means that people have to take the initiative to seek out and solve problems, using the common GTOC framework to harmonize their efforts.

Certainty uncertainty in the theory of constraints
An LI article on TOC claims TOC is felt to be incomplete, but the question is what is missing. The Illusion of Certainty (LI: Eli Schragenheim, Bill Dettmer 2025)
When there is no way to delay a decision, the clear choice is to choose the course that seems safer, regardless of the potential gain that might have been achieved. In other words, when evaluating new initiatives and business opportunities, the personal fear of negatives results, including those with very limited real damage to the organization, often produces too conservative a strategy. Ironically, this might actually open the door to new threats to the organization. However, uncertainty often permeates every detail in the plan, forcing the employees in charge of the execution to re-evaluate the situation and introduce changes. By confronting uncertainty, both during planning and execution, the odds of achieving all, or most, of the key objectives of the original plan increase substantially.
Living with uncertainty can create fear and tension. This can drive people to a couple of behaviors that can result in considerable "unpleasantness." When managers, executives, and even lower-level supervisors assess the organizational decisions they must make, they have two very different concerns. Actually, in most real-world cases the net impact of a particular move on the bottom line is not straightforward.
  1. In fact, determining the net contribution of just one decision, when so many other factors influenced the outcome, is open to debate and manipulation.
  2. It's easy to see this kind of after-the-fact judgment as unfair criticism, especially when it ignores the uncertainty at the time the decision was made.
  3. In most organizations leaders evaluate the performance of individual employees, including managers and executives. This practice is deeply embedded within the underlying culture of most organizations.
What motivates this need for personal assessment? In order to assess personal performance, management typically defines specific "targets" that employees are expected to achieve. This use of personal performance measurements motivates employees to try to set targets low enough so that, even in the face of situational variation, they'll be confident that they can meet these targets. In practice, this means that while targets are met most of the time, they are only seldom outperformed, lest top management set higher targets. (Today's exceptional performance becomes tomorrow's standard.)
In practice, this culture of distrust and judgment-after-the-fact produces an organizational tendency to ignore uncertainty. Why? Because it becomes difficult, if not impossible, to judge how good (or lackluster) an employee's true performance is.
The analysis: Schragenheim & Dettmer argue that uncertainty is unavoidable, but that paralysis in the face of uncertainty is a choice. Their core claims: Crucially, uncertainty is treated as an external condition that the decision-maker must cope with.
TOC optimizes for | A true double pendulum
Operational clarity | Weakens managerial authority.
Actionability | Delays commitment to the current model.
Managerial decisiveness | Requires reflexive leadership capacity.
The issue:
Why TOC tends to stay first-order, is not a mistake, it is a design choice.
Schragenheim & Dettmer are firmly within strategic rationality, even when they talk about learning and adjustment. Even when they warn against after-the-fact blame, the logic remains: "A good decision is one that increases the likelihood of success." This is teleological rationality, not discursive validity. Habermas: "this is means-ends rationality under uncertainty" and "The lifeworld assumptions are taken for granted."
RN-2.3.4 The role of certainty in systems, SD: second order
Uncertainty shifts from environment ➡ interpretation. Instead of "We lack information" it becomes "We lack shared understanding of what matters". The problem becomes discursive, not operational.
A double pendulum is not just "more uncertainty", but a qualitative change in system behavior:
Aspect | First-order pendulum characteristics | Double pendulum characteristics
How uncertainty is framed | Uncertainty is external (incomplete information) | Uncertainty is co-produced
Where the problem lives | Problem location is stable (the environment / future) | Problem location shifts
Role of the actor | Actor responds to the system (responding to reality) | Actor is part of the system
Role of learning | Learning corrects action (feedback and adjustment) | Learning redefines framing
Intelligibility | Constraint is "out there" (the system itself is intelligible) | Constraint may be epistemic
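The claim that a double pendulum is "a qualitative change, not just more uncertainty" can be illustrated numerically. A rough sketch (not from the source): two double pendulums whose starting angles differ by a hair, integrated with naive Euler steps of the standard equal-mass, equal-length equations of motion; all parameter values are illustrative:

```python
import math

def step(state, dt=0.0005, g=9.81):
    # One Euler step of a double pendulum with unit masses and lengths.
    t1, t2, w1, w2 = state
    d = t1 - t2
    den = 3 - math.cos(2 * d)  # 2*m1 + m2 - m2*cos(2d) with m1 = m2 = 1
    a1 = (-3 * g * math.sin(t1) - g * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * (w2 * w2 + w1 * w1 * math.cos(d))) / den
    a2 = (2 * math.sin(d) * (2 * w1 * w1 + 2 * g * math.cos(t1)
          + w2 * w2 * math.cos(d))) / den
    return (t1 + dt * w1, t2 + dt * w2, w1 + dt * a1, w2 + dt * a2)

def divergence(seconds=5.0, eps=1e-8, dt=0.0005):
    # Run two pendulums whose first angles differ by eps radians
    # and report how far apart those angles end up.
    a = (2.0, 2.0, 0.0, 0.0)
    b = (2.0 + eps, 2.0, 0.0, 0.0)
    for _ in range(int(seconds / dt)):
        a, b = step(a, dt), step(b, dt)
    return abs(a[0] - b[0])
```

A single pendulum would keep the 1e-8 difference tiny; the double pendulum amplifies it by orders of magnitude within seconds, which is exactly the "uncertainty is co-produced" column above.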

A double-pendulum model would ask different questions. This is second-order observation (Laske, Luhmann, von Foerster).
👉🏾 The problem lives in the interaction between interpretation, power, and action.
Under communicative action this is the double pendulum: one arm = action, second arm = interpretation / legitimacy.
Habermas' four validity claims become central:
Claim Question
Truth Plausible understanding of reality?
Rightness Acceptable to those affected?
Sincerity Are we honest about uncertainty?
Comprehensibility Do we understand each other?
Issue: none of these are operational metrics; they destabilize "decisiveness" and expose power asymmetries:
⚠️❗ A missing level for more certainty. Organizations stabilize uncertainty by privileging strategic action (Habermas) and work (Arendt) at cognitive levels (Laske C3-C4) that cannot tolerate the reflexive instability introduced by communicative action and action proper, thereby collapsing the second pendulum of meaning, legitimacy, and emergence. The real constraint is not uncertainty; it is developmental capacity under authority. Until that is acknowledged, the next option is using system dynamics (SD): shifting what is perceived in uncertainty.

BI life

RN-2.4 Becoming of identities transformational relations

In this dialectical path on information processing, supporting governance and informed, understandable decisions, the identity of persons, groups of persons, and organisations will have to change. The classical hierarchical power over persons is outdated and has become a blocking factor.

RN-2.4.1 Communities of practice - collective intelligence
"Communities of practice" theoretical
Alignment of the DTF Framework summary using an LLM. It is far beyond the personal human comfort zone but helpful in reflection and in finding the references for trustworthy sources.
Started with the communities of practice (CoP) of the EU JRC; it bypassed Wenger. Using a book review:
1 Domain what the community is about
2 Community social fabric and mutual engagement
3 Practice shared repertoire of doing
4 Identity / Learning becoming through participation

Wenger's mature CoP theory (1998-2010) rests on four pillars:

Wenger CoP structure, and three learning modes. This already tells us something important: Wenger is not describing a social structure, he is describing a meaning-producing system over time. That places him squarely in dialectical territory, even if he never uses the word.
Participation Reification
Local practice Global alignment
Experience Competence
Identity Community


Intelligence, learning, DTF Alignment to 6x6 and others
The evaluations of Jabes after the connection made to Laske. Using the reference-frame approach to systems thinking, combining Lean principles, the Zachman Framework, and systemic complexity. The idea is that to manage complexity, one must see multiple interdependent dimensions, not just a single linear process. It is meta-structural systems thinking, the same territory Laske calls dialectical. It is not a conventional article. Laske's Dialectical Thought Form Framework (DTF) is aimed at understanding and nurturing reasoning complexity: how people structure thought as they handle context, change, contradiction, and transformation. DTF has four categories, each containing 7 thought forms. Each class captures a way of thinking, from seeing events in relation to conditions, diagnosing interdependencies, and dealing with contradictions, to achieving integrative transformation.
SIAR - DTF | 6x6 Theme (Systems/Lean/Zachman) | Description
Sense - Context (C) | Context framing & constraints | Many parts of the page focus on systems boundaries, contexts for knowledge and roles. DTF C forms help analyze situating problems in context.
Act - Process (P) | Value stream & iterative cycles (e.g., PDCA, SIAR) | Lean emphasizes sequences, cycles, flow, stability, aligning with P's focus on temporal and unfolding structures.
Interpret - Relationship (R) | Interdependencies & roles within system subsystems | The 6*6 cells and fractal structure metaphor highlight relations and co-dependencies, aligning with R's structural focus.
Reflect - Transformation (T) | Dualities & fractal integration (backend - front end) | Here the document grapples with contradictions and integration across scales, which DTF's T forms capture, the move toward meta-levels of meaning.

The "Reflect" phase is not: "Did it work?" It is: "What needs to be re-framed, repositioned, or re-architected?"
The 6*6 framework and DTF overlap structurally, not conceptually; they do different jobs: DTF ➡ describes how people think. The 6*6 / SIAR framing ➡ describes how systems should be designed and navigated.
What DTF (diagnostic) has that this page does not aim to do: assess individual cognitive development, distinguish developmental levels, score or profile reasoning complexity. But the structure of movement is the same.
What the 6*6 framework (generative) has that DTF does not: normative design intent, architectural completeness, operational guidance for enterprise/system design.
They are complementary, not redundant. The SIAR 6*6 model operationalizes dialectical thinking at the system-design level, while DTF explicates the cognitive forms required to meaningfully operate such a model.
RN-2.4.2 The challenge in building up relationships
Lencioni model dysfunctions of a Team
Interpretation of understanding the Lencioni Model (K. Gowans) and (bitsize who?)
Whether you're running a team or simply a part of one, we hope you'll find our summary of Patrick Lencioni's insightful teamwork concept, "The Five Dysfunctions of a Team" useful. Lencioni uses a classic pyramid to explain the five main problems teams face.
In line to: In any team, performance ebbs and flows. But when results start slipping, it's essential to understand why rather than just push harder. The Lencioni Model provides a simple yet powerful framework to help you diagnose issues at their root and take meaningful action.
The Lencioni pyramid. One of the used figures, see right side.

There is a notion of the issues, but a clear dialectical connection is missing.
Reframing the Lencioni pyramid using signals:
negative signals relationship positive signals
1 (-) ⇄absence of trust-ethics
trust-ethics one another ⇆
Safe to speak up
2 (-) Openness in unclear honesty
3 (-) Collaboration
4 no ask for help when needed (-)
5 Guardedness (-)
6 Conceal weakness (-)
7 dread meetings (-)
8 team member avoidance (-)
.
1 Problems, issues avoidance ⇄fear of conflict
conflict for growth ⇆
Confront problems, issues quickly
2 Lack of transparency (-)
3 confusion (-)
4 (-) Openness-honesty, candour
5 (-) practical solutions
6 (-) minimal policies
7 (-) feedback, reflect & adapt
.
1 Ambiguous direction ⇄lack of commitment
commitment of team ⇆
Clear directions
2 Unclear priorities Clear on set priorities
3 Hesitancy (-)
4 Absenteeism (-)
5 Repetition same discussions Shared on common objectives
6 No autonomy autonomous activities
7 (-) power to the edge decisions
.
1 Poor performance tolerated ⇄avoidance of accountability
accountability taken ⇆
Poor performers held accountable
2 Missed deadlines, deliveries (-)
3 Environment of resentment Same standards apply to everyone
4 Flakiness Accepting responsibilities
5 Micromanagement Delegated responsibilities
6 Blame culture Accepting mistakes happen
7 (-) Resource provisioning with authority
.
1 High team turnover ⇄inattention to results
results are focus ⇆
Motivated & engaged team
2 Excuses, changing metrics (-)
3 Status game collective success
4 (-) gradually increase complexity
.
1 system performance fails ⇄inattention to service outcome
service outcome is focus ⇆
system performance gains

Start at building trust:
Trust is the foundation of teamwork. Teams who lack trust conceal weaknesses and mistakes, are reluctant to ask for help, and jump to conclusions about the intentions of other team members. It is crucial to establish a team culture where individuals feel able to admit to mistakes and weaknesses, and use them as opportunities for development.
Acceptance of frictions:
When teams do not engage in open discussion due to a fear of conflict, team members often feel that their ideas and opinions are not valued. They may become detached or even resentful, and fail to commit to the chosen approach or common goal as a result.
Fear of conflict: The desire to keep the peace stifles productive conflict within the team.
Shared goal commitment:
Do team members clearly understand how their work contributes to the bigger picture?
Lack of commitment - The lack of clarity and/or buy-in prevents team members from making decisions they will stick to.
Accountability:
Hold yourself accountable, and expect the same from your team. This can help foster a culture of responsibility and accountability.
The results:
Pursuing individual goals and personal status distracts the team's focus from collective results.
Is it imaginable that people on the team would make a reasonable personal sacrifice if it helped the larger team?

The Lencioni model is frustrating: the idea is clear, but the signals to recognize it are still not, even after using those two sources.
Adding another source to this: "Best Teams: Creating and Maintaining High-Performing Teams", by Marc Woods.
It names three crucial elements: empowered people, defined processes, and a supportive culture; the truth is that these three elements are deeply intertwined. The book is a good read, although lengthy; the tone setting is positive while mentioning the negatives.

Uncertainties in managing flows
Stage Question
1 Customer's mental model / stated problem What problem does the customer say they have, in their own words?
2 Ecosystem view: other actors' perspective How do other actors in the customer's environment interpret or feel the impact of this problem?
3 Human factors and behavioral dynamics What frictions, incentives, norms, habits, or power dynamics are blocking or reinforcing current behaviors?
4 Restated problem with status-quo attempts When we integrate these views and factors, what is the "real problem", and why have existing fixes or workarounds failed?
5 Enabling overlap with product/technology How does our product, expertise, or technology directly address these dynamics and create better conditions?
6 Feasible influence & needed capabilities What can we realistically influence today, and what additional capabilities would be needed to expand that influence?
7 Transformational Re-framing the chosen solution


butics
A typical example of ignoring uncertainty is widespread reliance on single-number discrete forecasts of future sales. Any rational forecast should include not just the quantitative average (a single number), but also a reasonable deviation from that number. The fact that most organizations use just single-number forecasts is evidence of the illusion of certainty.
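The point about single-number forecasts can be sketched in a few lines. A hypothetical `range_forecast` helper (not from the source) that reports the historical mean together with a deviation band instead of one "certain" number; the k-sigma band is an illustrative choice, not a prescription:

```python
from statistics import mean, stdev

def range_forecast(history, k=1.0):
    """Turn past sales into a (low, mid, high) forecast.

    A minimal sketch: mid is the historical mean, and low/high are
    k sample standard deviations around it, making the spread
    explicit instead of hiding it behind a single number.
    """
    mid = mean(history)
    spread = k * stdev(history)
    return (mid - spread, mid, mid + spread)
```

For example, `range_forecast([100, 120, 80, 110, 90])` yields a mid of 100 with a band of roughly ±16, which is exactly the "reasonable deviation" the text says rational forecasts should include.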
Organizations typically plan for long-term objectives as well as for the short term. A plan requires many individual decisions regarding different stages, inputs or ingredients. All such decisions together are expected to lead to the achievement of the objective. But uncertainty typically crops up in the execution of every detail in the plan. This forces the employees in charge of the execution to re-evaluate the situation and introduce changes, which may well impact the timeliness and quality of the desired objective.
What motivates people to make the decisions that they do? Many readers will be familiar with Abraham Maslow's hierarchy of needs. Maslow theorized that humans have needs that they strive to satisfy. Further, Maslow suggested that it's unsatisfied needs that motivate people to action. Maslow also suggested that human needs are hierarchical. This means that satisfying needs lower in the hierarchy pyramid captures a person's attention until they are largely (though not necessarily completely) satisfied. At that point, these lower-level needs become less of a motivator than unsatisfied higher-level needs. The person in question will then bend most of his or her efforts to fulfilling those needs.

RN-2.4.3 A practical case for understanding DTF impact
The Dod Strategy statement knowledge management: data
DoD data strategy (2020) Problem Statement
DoD must accelerate its progress towards becoming a data-centric organization. DoD has lacked the enterprise data management to ensure that trusted, critical data is widely available to or accessible by mission commanders, warfighters, decision-makers, and mission partners in a real-time, useable, secure, and linked manner. This limits data-driven decisions and insights, which hinders the execution of swift and appropriate action.
Additionally, DoD software and hardware systems must be designed, procured, tested, upgraded, operated, and sustained with data interoperability as a key requirement. All too often these gaps are bridged with unnecessary human-machine interfaces that introduce complexity, delay, and increased risk of error. This constrains the Department's ability to operate against threats at machine speed across all domains.
DoD also must improve skills in data fields necessary for effective data management. The Department must broaden efforts to assess our current talent, recruit new data experts, and retain our developing force while establishing policies to ensure that data talent is cultivated. We must also spend the time to increase the data acumen resident across the workforce and find optimal ways to promote a culture of data awareness.
The Department leverages eight guiding principles to influence the goals, objectives, and essential capabilities in this strategy. These guiding principles are foundational to all data efforts within DoD.
... Conclusion: Data underpins digital modernization and is increasingly the fuel of every DoD process, algorithm, and weapon system. The DoD Data Strategy describes an ambitious approach for transforming the Department into a data-driven organization. This requires strong and effective data management coupled with close partnerships with users, particularly warfighters. Every leader must treat data as a weapon system, stewarding data throughout its lifecycle and ensuring it is made available to others. The Department must provide its personnel with the modern data skills and tools to preserve U.S. military advantage in day-to-day competition and ensure that they can prevail in conflict.
4 Essential Capabilities necessary to enable all goals:
Capability Description
1 Architecture DoD architecture, enabled by enterprise cloud and other technologies, must allow pivoting on data more rapidly than adversaries are able to adapt.
2 Standards DoD employs a family of standards that include not only commonly recognized approaches for the management and utilization of data assets, but also proven and successful methods for representing and sharing data.
3 Governance DoD data governance provides the principles, policies, processes, frameworks, tools, metrics, and oversight required to effectively manage data at all levels, from creation to disposition.
4 Talent and Culture DoD workforce (Service Members, Civilians, and Contractors at every echelon) will be increasingly empowered to work with data, make data-informed decisions, create evidence-based policies, and implement effectual processes.

This resonates with the keywords: processes, frameworks, tools, metrics are bound to process (P) but mentioned under governance.
7 Goals (aka VAULTIS) we must achieve to become data-centric; DoD data will be:
Goals information capability
1 Visible Consumers can locate the needed data.
2 Accessible Consumers can retrieve the data.
3 Understandable Consumers can find descriptions of data to recognize the content, context, and applicability.
4 Linked Consumers can exploit complementary data elements through innate relationships.
5 Trustworthy Consumers can be confident in all aspects of data for decision-making.
6 Secure Consumers know that data is protected from unauthorized use and manipulation.
7 Interoperable Consumers and producers have a common representation and comprehension of data.
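As an illustration only (not part of the DoD strategy), the seven VAULTIS goals could be tracked per dataset as a simple checklist; the `VaultisCheck` name and its flags are hypothetical:

```python
from dataclasses import dataclass, fields

@dataclass
class VaultisCheck:
    # One flag per DoD data goal; names follow the VAULTIS acronym.
    visible: bool = False
    accessible: bool = False
    understandable: bool = False
    linked: bool = False
    trustworthy: bool = False
    secure: bool = False
    interoperable: bool = False

    def gaps(self):
        """Names of the goals a dataset does not yet meet."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]
```

For example, a dataset that is merely visible and accessible still reports five open goals, which makes the strategy's gap analysis concrete per asset.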

Make Data Secure As per the DoD Cyber Risk Reduction Strategy, protecting DoD data while at rest, in motion, and in use (within applications, with analytics, etc.) is a minimum barrier to entry for future combat and weapon systems. Using a disciplined approach to data protection, such as attribute-based access control, across the enterprise allows DoD to maximize the use of data while, at the same time, employing the most stringent security standards to protect the American people. DoD will know it has made progress toward making data secure when:
Objective information Safety
1 Platform access control Granular privilege management (identity, attributes, permissions, etc.) is implemented to govern the access to, use of, and disposition of data.
2 BIA&CIA PDCA cycle Data stewards regularly assess classification criteria and test compliance to prevent security issues resulting from data aggregation.
3 best/good practices DoD implements approved standards for security markings, handling restrictions, and records management.
4 retention policies Classification and control markings are defined and implemented; content and record retention rules are developed and implemented.
5 continuity, availability DoD implements data loss prevention technology to prevent unintended release and disclosure of data.
6 application access control Only authorized users are able to access and share data.
7 information integrity control Access and handling restriction metadata are bound to data in an immutable manner.
8 information confidentiality Access, use, and disposition of data are fully audited.
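Attribute-based access control, named above as the disciplined approach, can be sketched minimally. The attribute names (`clearance`, `classification`, `allowed_actions`) and the three-level lattice are illustrative assumptions, not actual DoD markings:

```python
def abac_allow(subject: dict, resource: dict, action: str) -> bool:
    """Attribute-based access control: a minimal sketch.

    Grants access only when the subject's clearance covers the
    resource's classification AND the action is one the resource's
    handling restrictions permit.
    """
    levels = ["unclassified", "confidential", "secret"]
    cleared = (levels.index(subject["clearance"])
               >= levels.index(resource["classification"]))
    permitted = action in resource["allowed_actions"]
    return cleared and permitted
```

The design point: the decision is computed from attributes bound to subject and resource, rather than from a static per-user access list, which is what makes the "granular privilege management" objective enforceable.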


RN-2.4.4 Info
architecture-development-common-mistakes (LI: tarun-singh 2025) Problem Statement
Most architecture failures don't happen suddenly. They happen quietly, through a series of reasonable decisions that compound over time. So what are the common mistakes, and what should change:
  1. Treating Architecture as Documentation
    Documentation as delivery is reactive; change that to proactive by using it in communication to help in decisions.
    -> You need a well-defined knowledge management system
  2. Starting with Technology Instead of Business
    It is the reaction on what is known before understanding the unknowns.
    Indeed technology should follow intent, not drive it.
  3. Designing Applications Instead of Capabilities
    Set known affordances before capabilities. The affordance is about what is in bounds for what is possible; training and experience are needed to realize it. Capabilities are what is already known and trained (reactive).
  4. Assuming Change Is an Exception
    Change with uncertainties is the certainty.
  5. Treating Non-Functionals as "Later Work"
    Performance, security, resilience, cost, and compliance are architectural decisions indispensable part of the application requirements.
    They are not just a technology question but organisational accountable
  6. Optimizing for Cleverness Over Clarity
    It is clarity: boundaries are what is simple in knowledge at a moment.
    When knowledge changes, boundaries change, and what is simple likely will change.
Those first six are a nice, distinct set of thoughts. The remaining ones are different, not less important.
  1. Ignoring Team and Ownership Boundaries
    Systems are around a set of defined activities.
    Teams will work the best when following the systems boundaries.
    The classic hierarchical organisation is only functional for the system if it follows the system boundaries. A disconnected way of C&C is a threat, not a capability.
  2. Over-Centralizing Architectural Control
    C&C can be seen in 4 levels: autonomy, guided, strict, regulated (external). They should all be in place in the system of the organisation.
  3. Letting Architecture Go Stale
    Stability without evolution is decay. (sic)
  4. Measuring Architecture by Diagrams, Not Outcomes
    In any system where the measurement becomes the goal, the desired outcome will be lost. So we have to define the outcome clearly. A well-defined "stated problem", as an evolving (changing) and continuously evaluated knowledge item, closes the loop. Only written with a perspective of what can be done instead of seeing what is going wrong.

Redefining leadership
Redefining Strategy for a World in Motion. (LI: Timothy Timur Tiryaki 2025) Problem Statement
Servant leadership is a philosophy first defined by Robert K. Greenleaf in 1970 in his essay The Servant as Leader. This approach flips the traditional, hierarchical view that employees serve leaders, advocating instead for leaders to serve their employees. It builds people-focused organizations and reminds us to be humble, act with care, and lead with humility.
Dr. Jim Laub's research identifies six essential behaviors that guide leaders in prioritizing serving others to create trust, engagement, and productivity:
  1. Demonstrating Authenticity: Show up with integrity, trustworthiness, and openness, leading from both the heart and mind.
  2. Growing Themselves and Others: Focus on continuous learning and help employees reach their potential through coaching and development.
  3. Valuing People: Build trust by respecting team members' abilities and listening without judgment, fostering a safe, engaging environment.
  4. Building Community: Create a collaborative culture where everyone feels they belong and can contribute to a shared vision.
  5. Providing Direction: Use foresight and clear guidance to align the team with goals and ensure clarity on the path forward.
  6. Sharing Power: Empower others to lead, encouraging autonomy and fostering leadership at every level of the organization.
Examples of Servant Leadership in Action These examples show that servant leadership is not only about building trust and engagement but also about unlocking the full potential of individuals and teams by fostering an environment where everyone can thrive.
But here's the real question: How do we shift from theory to action in our own leadership? What's one step leaders can take today to empower and uplift their teams?
Information processing Architecture.
Complexity and information organisational mismatch. The Architecture of Illusion (LI: Bree Hatchard 2025)
Why Enterprise Architecture is Dead: We are simply buying insurance policies against being blamed for the past.

Information processing Architecture.
Another post about the EA problems: Twelve Red Flags, EA time to climb out of your ivory tower (LI: Peter Rus 2026). The post mentions a lot but is not very clear in its intention. Twelve topics suggest two sets of six. Asking for an analysis of both (from the two LinkedIn posts) is surprising.
EA's current crisis, Core diagnosis, EA today oscillates between Illusion-maintaining representations and Operational irrelevance. It produces maps without territory, governance without authority, and abstraction without agency. The "two hidden sets of six" (implicit), across both posts, we can reconstruct two orthogonal EA modes:
Set A - control-facing Set B - sense-making-facing
Illusionary EA Emergent EA
Static models Completeness 1⇄ Incompleteness Living structures
Retrospective rationalization Standardization 2⇅ Context Decision-context awareness
Tool-driven abstraction Predictability 3⇅ Judgment Trade-off visibility
Governance theater Reuse 4⇄ Evolution Capability evolution
Stakeholder appeasement Control 5⇅ Power awareness Power-aware design
Post-hoc coherence Abstraction 6⇄ Sense-making Situated judgment

A human reaction: A sharp framing. The first set optimizes stability after commitment, the second only works when authority exists before commitment. Most EA teams are asked to deliver rupture-capable outcomes while being structurally positioned in illusion-preserving decision flows. That mismatch, not capability, is what keeps the pattern repeating.
Continuations LLM: EA is stuck pretending to be Set B while structurally operating as Set A. This is not a maturity ladder; it is a structural incompatibility, because EA is being asked to do two incompatible things: to close meaning versus to behave in a way that opens meaning. This produces the illusion of definition without function, and EA responds by: EA's real function is that it must not close. Its job is not to define the enterprise, but to: Enterprise Architecture is the disciplined construction of containers that are strong enough to hold meaning, yet weak enough to let contradiction speak.
feel order

RN-2.5 Closing the loop using dialectical thinking

This different path on information processing, supporting governance and informed, understandable decisions, uses reflection (R) intensively, although it is never mentioned as a dialectical thoughtform. Reflection is the closed loop that drives change & transformations, but there are challenges.

RN-2.5.1 DTF Alignment to the 6x6 reference frame & Jabes
Intuitive low boundary reactions to system complexity

Scope setting DTF "the problem" fractal
It is about analysing the developmental affordances and constraints of an artifact (text, framework, strategy, narrative) using DTF-informed lenses. The key shift lies in this essential distinction:
👉🏾 Affordances are possibilities in the world, whereas capabilities capacity are the power to act on those possibilities, with the best outcomes happening when affordances and capabilities align.
Feature: Affordance vs Capacity / Capability
  • Source: an affordance is external, residing in the relationship between the object and the user; a capacity is internal, residing within the user (physical or cognitive).
  • Nature: an affordance is relational, existing only if the agent's capacity matches the object's properties; a capacity is absolute/individual, defining the boundaries of what an individual can do.
  • Example: a flight of stairs affords climbing to a healthy adult but not to a crawling infant; an adult has the capacity to lift 50 lbs, an infant does not.

How They Interact: the aim is to analyze what kinds of meaning-making this artifact enables, presupposes, or suppresses.
In stating "the problem", change can get a chance. How does this become "DTF-safe scoring"? Instead of numbers or stages, use ordinal or qualitative markers. Examples: Example statement:
This can be used as knowledge containers in Jabes in two types: the problem description and the DTF scoring of the descriptions. The pattern is usable as a fractal at any level and in any type of context, because each ?-PTF is structural, not content-specific. Each can be applied to: The propagations triggered by decisions are what enable activities over all levels. The Decision Choice Value Evaluation (DCVE) items:
BI life
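The two container types can be sketched as data structures. A minimal, hypothetical Python sketch (class, field, and lens names are mine for illustration, not Jabes internals), using ordinal markers instead of numeric stage scores:

```python
from dataclasses import dataclass, field
from enum import Enum

class Marker(Enum):
    """Hypothetical ordinal markers: qualitative bands, not clinical CD scores."""
    ABSENT = "absent"
    EMERGING = "emerging"
    DEFAULT = "default"
    CEILING = "ceiling"

@dataclass
class ProblemDescription:
    """Container type 1: the problem statement itself."""
    context: str
    statement: str

@dataclass
class DtfScoring:
    """Container type 2: ordinal DTF markers attached to a description."""
    subject: ProblemDescription
    markers: dict = field(default_factory=dict)  # lens name -> Marker

    def add(self, lens: str, marker: Marker) -> None:
        self.markers[lens] = marker

# Usage sketch (content invented for illustration).
desc = ProblemDescription(context="portfolio", statement="Delivery stalls at handover.")
score = DtfScoring(desc)
score.add("context-layering", Marker.EMERGING)
score.add("negation", Marker.ABSENT)
print(sorted(score.markers))
```

Because each container is structural rather than content-specific, the same pair can be nested at any fractal level: a scoring record can itself become the subject of a description one level up.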

RN-2.6 Evaluating system dialectical thinking

This different path on information processing is an emerging perspective for governance and informed, understandable decisions. It started with a simplified understanding of a pull-push mindset, the becoming of the Siar model.

RN-2.6.1 What legitimately can be done with DTF using texts
Why Laske-style scoring of text usually fails
Working with ChatGPT prompts to reflect on these ideas, a brutal one was tried: asking whether there is a possible contact working on this, and why there is a difference between insight and experience. The surprising reply of ChatGPT is what the rest of the paragraph is about.
I don't conduct certified Constructive Developmental Interviews (CDI), nor do I generate clinically valid CD scores. What I do work with is: That distinction explains exactly why text scoring "didn't succeed" in Jan's work.
Laske's method is not text analysis, CD scoring relies on:
Key point: Laske does not score what people say, he scores how they make meaning when their thinking is stretched. Most documents: So the raw developmental signal is filtered out, texts collapse subject-object distinctions. In CDI The interviewer actively pushes: A document: So: Subject-object movement becomes unobservable, you get position statements, not meaning-making in action. This is why Jan says it didn't succeed.
🚧 The real issue: confusing three different things, most failed attempts conflated:
  • CDI scoring: clinical developmental assessment
  • DTF diagnosis: systemic developmental patterning
  • Textual signal reading: interpretive structural analysis
See table right side.
Laske only validates the first,
DTF uses Laske, but is not limited to CDI.

Jan's attempts likely failed because: The object of scoring was wrong, or the claims exceeded what text analysis can support. Where Jan is correct, 100%: 😎 🧠 Where there is room, legitimate space.
CDI / Laske vs Zarf cynefin Jabes:
  • Scores persons vs scores artifacts / systems
  • Requires elicitation vs uses produced language
  • Looks for transitions vs looks for ceilings and defaults
  • Psychometric vs structural / architectural
Not trying to: Diagnose individuals, replace CDI, do psychometrics.
👉🏾This is the crux (see table).
Zarf Jabes is DTF-level work, not CDI work.
The key difference: who/what is being scored?

You (Zarf Jabes) are trying to:
What can be done with texts (legitimately)
Examples:
What order of meaning is assumed?
What contradictions are invisible?
What kinds of agency are permitted?
What kinds of conflict are excluded?
Here is the precise, defensible move: This yields: That is exactly what you've been doing intuitively.
Why my earlier analysis is still valid (but limited).
When I said "late S3 ➡ early S4", that was a heuristic placement of the artifact's affordance, not a clinical score. If I were working under Jan's standards, I would phrase it as:
😎 🧠 That phrasing matters. Why this matters for ZARF and your work: your ZARF framework is: It does not need CDI validity; what it needs is: consistent lenses, explicit limits, a clear separation between persons and systems. DTF fits that if used correctly.
Those are adjacent but not identical pursuits, bottom line: there is no contradiction between Jan being right and your approach being valid.

RN-2.6.2 Using a mindset with graphs in understanding thought forms
From Knowledge to Graphs and Back Again
A difficult dialectical word: ontology. It needs a more precise explanation. From Graphs Back to Meaning: Why Ontology Is Not a Phase in the Stack (LI: J.Bittner 2025)
The Year of the Graph newsletter published "The Ontology Issue: From Knowledge to Graphs and Back Again." The instinct behind that piece is right. The field is finally confronting the limits of connectivity without meaning. But there is a category error we still need to correct.
Ontology is not something systems move away from and later rediscover. It is not a layer added once graphs get large enough or AI systems get sophisticated enough. That distinction is not academic, it has direct ROI implications. When meaning is left implicit, organizations pay for it later through: Ontology does not make systems faster on day one, it makes them stable under change. It enables: Meaning does not emerge from structure alone. Meaning comes from commitment. If your systems are scaling faster than their assumptions, this distinction matters.
An ontology (html at: yearofthegraph.xyz) is an explicit specification of a conceptualization which is, in turn, the objects, concepts, and other entities that are presumed to exist in some area of interest and the relationships that hold among them.
Ontology introduces the semantic foundation that connects people, processes, systems, actions, rules and data into a unified ontology [sic]. By binding real-world data to these ontologies, raw tables and events are elevated into rich business entities and relationships, giving people and AI a higher-level, structured view of the business to think, reason, and act with confidence.
Just as you wouldn't bring half your brain to work, enterprises shouldn't bring half of artificial intelligence's capabilities to their architectures. Neuro-symbolic AI combines neural-network technology like LLMs with symbolic technology like knowledge graphs. This integration, also known as "knowledge-driven AI", delivers significant advantages: If you're not exploring how knowledge graphs and symbolic AI can augment your organization's intelligence, both artificial and actual, now is a good time to start.
Reverting the changing intention into the opposite
Real change is hard. An article explains the why: "How Every Disruptive Movement Hardens Into the Orthodoxy It Opposed", a pattern that keeps repeating (LI: S.Wolpher 2025).
The arc in religions as similarity.
In 1517, Martin Luther nailed his 95 theses to a church door to protest the sale of salvation. The Catholic Church had turned faith into a transaction: Pay for indulgences, reduce your time in purgatory. Luther's message was plain: You could be saved through faith alone, you didn't need the church to interpret scripture for you, and every believer could approach God directly.
By 1555, Lutheranism had its own hierarchy, orthodoxy, and ways of deciding who was in and who was out. In other words, the reformation became a church. Every disruptive movement tends to follow the same arc, and the Agile Manifesto is no exception.
The Agile Arc
Let us recap how we got here and map the pattern onto what we do: The Manifesto warned against the inversion: "Individuals and interactions over processes and tools." The industry flipped it. Processes and tools became the product. Some say they came to do good and did well. I'm part of this system. I teach Scrum classes, a node in the network that sustains the structure. If you're reading this article, you're probably somewhere in that network too.
That's not an accusation. It's an observation. We're all inside the church now.
Why This Happens
A one-page manifesto doesn't support an industry. But you can build all of that around frameworks, roles, artifacts, and events. (Complicated, yet structured systems with a delivery promise are also easier to sell, budget, and measure than "trust your people that they will figure out how to do this.")
Simplicity is bad for business. I know, nobody wants to hear that.
Can the Pattern Be Reversed?
At the industry level, this probably won't be fixed. The incentives are entrenched. But at the team level? At the organization level? You can choose differently.
You can practice the principles without the apparatus. You can ask, "Does this help us solve customer problems?" instead of "Is this proper Scrum?" You can treat frameworks as tools, not religions.
Can you refuse to become a priest while working inside the church? I want to think so. I try to, and some days I do better than others.

The resistance to change optimizing work in Lean context
The Myth of Early Buy-In for TPS (LI: K.Kohls 2025) This paper examines documented resistance to TPS during its formative years, the role of Taiichi Ohno in enforcing behavioral change prior to belief, and the implications for contemporary Continuous Improvement (CI) implementations.
The evidence suggests that TPS did not succeed because of early buy-in or cultural alignment, but because leadership tolerated prolonged discomfort until new habits formed and results compelled belief. The phase shift idea in the Cynefin framework is a similarity.
  1. The myth of harmony by culture: The Toyota Production System (TPS) is frequently portrayed as a harmonious, culture-driven system that emerged naturally from organizational values. This narrative obscures the historical reality. Primary and secondary sources reveal that it was introduced amid significant internal resistance, managerial conflict, and repeated challenges to its legitimacy.
  2. The Retrospective Fallacy of TPS: From the perspective of frontline supervisors and middle managers, inventory functioned as psychological and political protection. Removing it threatened identity, status, and perceived competence. Resistance was therefore not irrational; it was adaptive within the existing reward structure.
  3. Conditions of Constraint Rather Than Enlightenment: Existential challenges: limited capital, unstable demand, poor equipment reliability, and an inability to exploit economies of scale. These constraints forced Toyota to pursue alternatives to Western mass production models, not out of philosophical preference, but necessity.
  4. Central Conflict: Visibility Versus Safety: The Andon system, now widely cited as a symbol of "respect for people", was initially experienced as a source of fear rather than empowerment. Supervisors, accustomed to being evaluated on output volume and equipment utilization, frequently discouraged Andon pulls, implicitly or explicitly. Psychological safety, therefore, was not a prerequisite for Andon; it was an outcome that emerged only after repeated cycles of visible problem resolution.
Historical studies demonstrate that TPS adoption was neither uniform nor immediate.
  1. Uneven Adoption and Internal Workarounds Fujimoto's longitudinal analysis shows that early TPS practices were localized, inconsistently applied, and often circumvented by managers seeking to preserve traditional performance metrics.
    Cusumano further documents periods during which TPS was questioned internally, particularly when short-term performance declined. In several instances, Toyota leadership faced pressure to revert to more conventional production approaches. TPS persisted not because it was universally accepted, but because senior leadership tolerated internal conflict long enough for operational advantages to become undeniable.
  2. Enforcement Before Understanding: Steven Spear reframes TPS not as a cultural system but as a problem-exposing architecture that forces learning through repeated action. Importantly, Spear emphasizes that many TPS behaviors were enforced before they were fully understood or emotionally accepted.
    John Shook's firsthand account corroborates this view, noting that Toyota managers learned TPS "by doing," often experiencing frustration and discomfort before developing deeper understanding. Respect, in this framing, was earned through consistent support during failure, not granted through initial trust.
  3. Implications for Contemporary CI Implementations: The historical record suggests that TPS succeeded not by avoiding these dynamics, but by enduring them. Behavior preceded belief; habit preceded culture.
    Modern CI efforts frequently fail for reasons that closely mirror early TPS resistance:
    • An expectation of buy-in prior to behavioral change
    • Aversion to short-term performance dips
    • Avoidance of discomfort in the name of engagement
    • Overreliance on persuasion rather than structural reinforcement
  4. This history carries a sobering implication: Organizations seeking TPS-like results without TPS-level tolerance for discomfort are attempting to reap outcomes without enduring the process that created them. Ohno's legacy lies not in tool design alone, but in his willingness, and Toyota leadership's tolerance, to sustain a system that made problems visible, challenged identities, and disrupted established norms long enough for new habits to form.
I reordered the LI-post into two sets, one for the organisational system and one for the technical realisations. The overall conclusion: manage the tensions where they become visible.
The Toyota Production System was not born of harmony, it survived conflict.

RN-2.6.3 Governance boundaries in complex & chaotic systems
A modified perspective on polyarchy and heterarchy: when humans are no longer seen as the only decision makers, the terms become synonyms. The Mismatch Between Organisational Structure, Complexity and Information (LI: Abdul A. 2025)
➡️ Hierarchy is the most familiar.
➡️ Heterarchy is different (polyarchy). ➡️ The third pattern - recursion, or holarchy (elsewhere: multiple persons at a node). 🔏 🤔 One of the reasons debates about structure become polarised is that we treat these patterns as mutually exclusive. In reality, most organisations use all three - often without realising it and often incoherently. Structuring governance and information:
  1. Autonomy - Cohesion: Every organisation must balance local freedom to act with the need for global coordination.
  2. Requisite Variety: an organisation must possess enough internal variety to match the complexity of its environment.
  3. Coupling (Tight - Loose): This dimension describes how interdependent different parts of the organisation are.
  4. Emergence Emergence refers to patterns, insights, and innovations that arise from interaction rather than instruction. Not all valuable behaviour can be designed in advance.
Information and structure governance:
  1. Feedback Loops: Feedback determines how the organisation learns and self-corrects over time. Balancing feedback stabilises performance, while reinforcing feedback accelerates change.
  2. Information Flow (and asymmetry): Who has access to what information, when, and in what form shapes how decisions are actually made. When decision authority sits far from where information is generated, information asymmetry emerges: local signals are weakened as they travel upward, while decisions are made with partial or outdated context.
  3. Modularity: Modularity reflects the system's ability to change or recombine parts without destabilising the whole.
  4. Redundancy vs Efficiency: This dimension captures: trade-off between optimisation and resilience. Redundancy often appears inefficient in stable conditions, yet provides the buffer capacity that allows systems to absorb shocks, maintain feedback, and adapt under stress.
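The first of these dimensions, feedback, can be made concrete with a toy linear model (my own illustration, not from the post): balancing feedback shrinks the error toward a target each step and stabilises, while reinforcing feedback amplifies the deviation and accelerates change:

```python
def step(value, target, gain, mode):
    """One feedback step.

    'balancing' pulls the value toward the target (error shrinks each step);
    'reinforcing' pushes the value away from it (error compounds).
    """
    error = target - value
    if mode == "balancing":
        return value + gain * error  # error is multiplied by (1 - gain)
    return value - gain * error      # error is multiplied by (1 + gain)

def run(mode, value=0.0, target=100.0, gain=0.3, steps=20):
    """Iterate the loop and return the final value."""
    for _ in range(steps):
        value = step(value, target, gain, mode)
    return value

stable = run("balancing")    # converges close to the target
runaway = run("reinforcing") # deviation compounds instead
print(round(stable, 2), round(runaway, 2))
```

With gain 0.3, the balancing loop leaves only 0.7^20 of the original error after 20 steps, while the reinforcing loop multiplies it by 1.3^20; the same structure, with opposite sign, produces stability or runaway.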

RN-2.6.4 System execution boundaries and moving boundaries
The cycle reflection in organisational structure
Leaving lined areas open in the 6*6 frame is a mindset switch into seeing the operational flow. By seeing four small 9-cell planes and one bigger one composed of quadrants, a new perspective appears. The ordering of the cells is not randomly chosen but follows the SIAR orientation for each and for the whole.
The journey in a polyarchy
In a figure:
See right side.

This visual is not generated but manually made.
Leaving the two lines mostly open is an idea taken from the advice for analysing the problem.

The question is: what would the LLM make of this in relation to what was built in all this?

👐🎭 Diagonal 1: Transformational Tension Arc: R4 ➡ R7 ➡ R1 ➡ T4 ➡ T7 ➡ T1
This diagonal moves from power asymmetry (R4) in Public Culture through relation integration (R7) and mutual influence (R1), into negation and breakdown (T4), transformational integration (T7), and finally emergence (T1) in the Personal- Polyarchy corner. What this arc represents: It mirrors Laske's cognitive series (DTF) and social-emotional development from externalized power to internalized emergence. Tensions to solve: This arc is about liberating developmental potential across domains.
👐🎭 Diagonal 2: Containment, Process Tension Arc: C3 ➡ C7 ➡ C6 ➡ P3 ➡ P7 ➡ P6
This diagonal moves from parts/resources (C3) in Personal- Environment through context layering (C7), shift (C6), into process directionality (P3), stabilization (P7), and interruption (P6) in Public Purpose- Polyarchy.
What this arc represents: It mirrors Laske's need/press series and organizational series how motivation and structure shape process. Tensions to solve: This arc is about governing semantic flow across modalities.
👐👁️ The cycle reflection in the organisational structure R1, C6, T4, P3 in the center.
First: these are the right cluster for the four roles you listed. It works because those four codes describe the core tensions that every governance layer must handle, just at different time horizons and levels of abstraction. What the four central codes actually mean: These four together form a governance nucleus because they represent the four irreducible tensions that every organization must balance: Everything else in the matrix is a specialization or extension of these four. Second: how these map to your four roles. This is where it gets elegant; each role naturally sits on one of these four tensions.
  1. Service Desk ➡ P3 (process directionality) Operational-now work is about:
    • routing, prioritizing, sequencing, resolving, maintaining flow
    The service desk lives in process directionality.
    They don't change context; they execute within it.
  2. Functional Management ➡ R1 (mutual influence) Functional managers operate in:
    • negotiation, alignment, stakeholder balancing, resource coordination
    Their world is mutual influence, not pure hierarchy, not pure autonomy.
  3. Portfolio Coordination ➡ C6 (shift) Portfolio work is about:
    • reframing, shifting priorities, adjusting context, balancing change vs capacity
    This is contextual shift embodied.
  4. Boardroom ➡ T4 (negation / contradiction) Boards deal with:
    • contradictions, trade-offs, systemic risks, structural tensions, long-term viability
    This is the essence of dialectical negation T4.
    Boards don't solve problems; they surface contradictions and set direction.
Why this mapping is developmentally coherent: it unintentionally recreated a Jaques/Kegan/Laske developmental stack. This is not coincidence, it's structural. The matrix is revealing the developmental logic of organizational roles.
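The governance nucleus can be captured as a simple lookup table. A hypothetical Python sketch (the codes and role names come from the mapping above; the encoding itself is mine):

```python
# Four central tensions of the 6*6 matrix and the role anchored to each.
GOVERNANCE_NUCLEUS = {
    "P3": ("process directionality",   "Service Desk"),            # operational-now flow
    "R1": ("mutual influence",         "Functional Management"),   # negotiation, alignment
    "C6": ("shift",                    "Portfolio Coordination"),  # contextual reframing
    "T4": ("negation / contradiction", "Boardroom"),               # trade-offs, direction
}

def role_for(code: str) -> str:
    """Look up which role sits on a given central tension."""
    tension, role = GOVERNANCE_NUCLEUS[code]
    return f"{role} handles {tension}"

print(role_for("T4"))
```

Everything else in the matrix being a specialization of these four means any other cell would reference, rather than replace, one of these four entries.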

The pull-push cycle reflection in organisational structure
The boundary-tension lines that complete the pull-push cycle of any product/service flow are what has been left open.
👐🎭 These two lines are not random; they are the outer boundary tensions that govern how a product or service moves from context ➡ transformation ➡ integration ➡ stabilization. The chosen words, family clan and public polyarchy, are inherited from a different perspective; it is hard to find other ones that convey the intention.
👐👁️ This essentially mapped the value stream at the semantic level.
The cycle closes outside the matrix. Everything inside the 6*6 grid describes the internal cognitive- cultural engine of an organization: But it is not the whole system, it is the inside of the cycle. "What is needed?" and "How do we deliver?" are boundary conditions, not internal states. Value creation (retrieval ➡ delivery) is a flow that passes through the semantic engine.
This is a closed-loop viability cycle, a perfect three-layer cybernetic model. 👉🏾 Resource retrieval and resource delivery are outside the 6*6 quadrant.
Cycle-1 | Cycle-2
  IV. Pull - contextual demand | New context, pull
  III. Internal governance (6*6 matrix) | ....
  I. Push (delivery stabilization) | ....
  II. External environment (resource delivery) | ....
They sit at the ends of the pull-push axis-diagonals and close the cycle.
👉🏾 What is recreated is Stafford Beer's VSM logic, but in a semantic-developmental form.
The cycle see right side.
It closes outside the 6*6 matrix.


Changing the assumption of the single constraint theory
The TOC theory assumes there is a single fixed constraint in the system. When that assumption holds, it works without surprises.
What "predictable within limits" actually means: a double pendulum is chaotic because: But system dynamics doesn't remove chaos, it contains it. It creates: This is exactly what viable organizations do.
So if people can handle the uncertainty, the system becomes coherent enough to steer, even if it’s not fully predictable.
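The double pendulum is the text's example; as a simpler stand-in (my substitution), the logistic map makes the same point: deterministic chaos is sensitive to initial conditions, yet every trajectory stays inside a bounded envelope, which is what "predictable within limits" means here:

```python
def logistic(x, r=3.9):
    """One step of the logistic map; r = 3.9 lies in the chaotic regime."""
    return r * x * (1.0 - x)

def orbit(x0, steps=200):
    """Iterate the map from x0 and return the whole trajectory."""
    xs, x = [], x0
    for _ in range(steps):
        x = logistic(x)
        xs.append(x)
    return xs

a = orbit(0.400000)
b = orbit(0.400001)  # a tiny change in the initial condition

# Sensitive dependence: the two trajectories separate widely...
separation = max(abs(p - q) for p, q in zip(a, b))
# ...yet the chaos is contained: every state stays inside [0, 1].
bounded = all(0.0 <= x <= 1.0 for x in a + b)
print(separation > 0.1, bounded)
```

Exact forecasting of a single step far ahead is hopeless, but the envelope, the attractor the states live on, is stable, so the system can still be steered even though it is not fully predictable.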
🚧🎭 Changes in the three‑layer viability model:
External Flow Layer becomes smoother, instead of wild swings in: There will be oscillations that stay within a viable envelope. This means the pull–push cycle becomes: The environment still changes, but it no longer shocks the system.
Semantic Governance Layer becomes proactive instead of reactive.
Right now, most organizations operate in: If the double pendulum becomes predictable within limits, the semantic layer can: This is a shift from sense‑making to sense‑shaping.
The Governance Kernel becomes a steering engine, not a crisis engine Your four central tensions: Stop being emergency brakes and become steering mechanisms. What changes: This is a developmental leap in governance maturity.
What changes in the people? If uncertainty becomes manageable: People stop operating in threat mode and move into learning mode.
This is exactly what Kegan, Laske, and Jaques describe as the shift from:
🚧🔷 What changes in the organization?
  1. Less oscillation between extremes. No more:
    • autonomy ➡ control, agile ➡ waterfall
    • centralize ➡ decentralize, innovation ➡ standardization
    The system finds stable attractors.
  2. Higher coherence across domains, the 6×6 matrix becomes:
    • more predictable, less contradictory,
    • more aligned, more governable
  3. Better viability, the organization becomes:
    • more adaptive , more anticipatory,
    • more resilient, less fragile
    This is the essence of cybernetic viability.
🚧🔷 What changes in the pull–push cycle?
Right now, pull–push is often: With bounded predictability: This is the moment when JABES becomes a living system, not a diagnostic tool.
🎯 💰 The big picture: If the double pendulum becomes predictable within limits, the organization transitions from: ❌ Chaotic adaptation to ✅ Dynamic stability (the holy grail of systems design).
🎯 Know_npk Gestium Stravity Human-cap Evo-InfoAge Learn-@2 🎯
  
🚧  Knowium P&S-ISFlw P&S-ISMtr P&S-Pltfrm Fractals Learn-I 🚧
  
🔰 Contents Frame-ref ZarfTopo ZarfRegu SmartSystem ReLearn 🔰


RN-3 The three different time consolidation perspectives


diagonal tensions

RN-3.1 Data, gathering information on processes.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-3.1.1 Info
butics
Turing thesis
butics
History of management consulting (D. McKenna 1995): Congress passed the Glass-Steagall Banking Act of 1933 to correct the apparent structural problems and industry mistakes that contemporaries believed led to the stock market crash of October 1929 and the bank failures of the early 1930s.
The data explosion. The change is the amount of data we are collecting, measuring processes as new information (edge).

📚 Information questions.
⚙ measurements data figures.
🎭 What to do with new data?
⚖ legally & ethical acceptable?
BI life

Autonomy at scale is a double-edged sword (LI J.Lowgren 2026) That is not a slogan. It is a structural reality. Autonomous systems do not negotiate ambiguity, compensate for inconsistency, or quietly fix what was never properly designed. They execute what exists. Which is why so many AI initiatives are failing in the same way, at the same moment, for the same reason.
None of them survive contact with the enterprise.
Production environments introduce everything the PoC avoided: competing priorities, legacy systems, regulatory constraints, organizational boundaries, inconsistent data, and time pressure. Decisions no longer happen in isolation. They interact with other decisions already in motion. At that point, failure is not gradual. It is abrupt. The AI does not degrade. The environment does.
Agentic systems cross a line that changes the nature of the risk. They decide, initiate actions, and coordinate across systems without waiting for human interpretation at every step. Agentic AI is not a feature upgrade. It is a structural shift. Once systems can act, ambiguity compounds quickly. Small inconsistencies turn into incorrect actions. Unclear authority becomes operational confusion. Errors no longer stay local. They propagate.
Agentic AI does not introduce chaos. It removes the human scaffolding that was quietly holding fragile systems together. What feels like sudden instability is often something else entirely. It is the organization seeing itself clearly for the first time. Enterprise architecture is the only discipline that spans: Frameworks such as TOGAF were not written for autonomous agents, but they were designed to answer the question agentic AI makes unavoidable:
How does a complex organization remain coherent when decisions are distributed?
Agentic AI does not make enterprise architecture obsolete. It makes the absence of it visible.
diagonal tensions

RN-3.2 Data, gathering information on processes.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-3.2.1 Info chp2
butics

The cycle reflection in organisational structure
Of the 6*6 reference frame, some abstraction levels collapse depending on the observer's perspective. Only four of them get noticed by seeing:
RN-3.2.2 Info chp2
Existing systems that are hard to change
Construction:
Construction regulations for 2025 focus heavily on sustainability, safety, and digitalization. Key changes include stricter energy performance requirements, new Digital Product Passports (DPP) for materials in the EU, updated health & safety roles (like registered safety managers), and a push for greener building methods (heat pumps, solar). In the UK, the Building Safety Levy and new protocols for remediation orders are emerging, while globally there is a trend towards clearer, faster permitting and greater accountability in construction. Note: regulations vary significantly by country.
The Construction Products Regulation (CPR) is a pivotal EU legislation that sets standardized safety, performance, and environmental impact requirements for construction products across the EU. Originally established in 2011 to streamline the circulation of construction products within the Single Market through standardized guidelines, the CPR was updated in 2024 to address modern environmental challenges, advancing sustainability and transparency in the construction sector.
Health:
cdisc: In July 2022, the FDA published, in Appendix D of their Technical Conformance Guide (TCG), a description of additional variables they want in a Subject Visits dataset. A dataset constructed to meet these requirements would depart from the standard, so validation software would create warnings and/or errors for the dataset. Such validation findings can be explained in PHUSE's Clinical Study Data Reviewer's Guide (cSDRG) Package.
phuse: The Global Healthcare Data Science Community, sharing ideas, tools and standards around data, statistical and reporting technologies. PHUSE Working Groups bring together volunteers from diverse stakeholders to collaborate on projects addressing key topics in data science and clinical research, with participation open to all.
RN-3.2.3 Info chp2
Existing systems that are hard to change
https://big-cic.org.uk/what-is-big/ https://www.deepteam.co.uk/what-is-big
Business Integrated Governance (BIG) is a framework that aligns governance, risk management, and compliance (GRC) with business strategy and operations to enhance decision-making and drive sustainable performance. Key aspects of Business Integrated Governance (BIG):
  • Alignment with Business Strategy: governance frameworks are designed to support and drive business goals rather than just ensuring regulatory compliance.
  • Risk Management Integration: governance processes include proactive risk management, identifying and mitigating risks that could impact business performance.
  • Performance-Driven Governance: decision-making is data-driven and focused on improving efficiency, effectiveness, and business outcomes.
  • Stakeholder-Centric Approach: governance considers the interests of all stakeholders, including shareholders, employees, customers, and regulators.
  • Technology & Automation: digital tools and AI are often used to streamline governance processes, ensuring transparency and real-time monitoring.
  • Agility & Adaptability: governance frameworks are flexible and adaptable to changing market conditions, regulatory requirements, and organizational needs.
diagonal tensions

RN-3.3 The three different time consolidation perspectives

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-3.1.1 Info
butics
Removing certainty constraints blocking decisions Affective Learning Systems ...

butics
Moral Complexity of Organisational Design (LI:R.Claydon 2025) Buurtzorg has become a kind of organisational Rorschach test. In his original essay, Stefan Norrvall reads it through a lens of organisational physics: Buurtzorg works because it relocates integrative load from managers into small whole-task teams, architecture, and an unusually supportive Dutch welfare ecosystem. In response, Otti Vogt argues that this frame is ontologically and morally too thin: Buurtzorg is not just a clever cybernetic design, but a solidaristic, post-neoliberal project grounded in care ethics, widening moral circles, and a refusal to treat nursing as timed piecework.

diagonal tensions

RN-3.4 information on chap4

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-3.4.1 Distinctions into tension of cultural dimensions
Culture internal external
Hofstede's cultural dimensions theory is a framework for cross-cultural psychology, developed by Geert Hofstede. It shows the effects of a society's culture on the values of its members, and how these values relate to behavior, using a structure derived from factor analysis.
Hofstede's original four dimensions (1980s) were later expanded to six. Thinking of Hofstede in four classes where there are six creates a tension between the classic fourfold framing (still widely cited in management discussions) and the full six-dimensional model (more academically complete). Re-framing Hofstede's set of dimensions by swapping one of the "classic four" (Power Distance) with Long-Term vs Short-Term Orientation, and then treating Indulgence vs Constraint and Power Distance as external cultural forces, gives a hybrid model where the internal set is four and the external set is two.
This restructuring does something interesting: it internalizes adaptive learning and values, making them the "operational" cultural levers inside teams (four internal), and it externalizes structural and societal constraints, treating them as boundary conditions that shape but don't directly drive team dynamics (two external). That's a neat systems-thinking move: distinguishing between cultural drivers that can be shifted through knowledge sharing and governance versus macro-forces that set the stage but are harder to change directly. This aligns with the broader interest in semantic governance overlays, effectively creating a layered model where internal dimensions are "governable" and external ones are "contextual constraints".

A 4+2 model to acknowledge cultural distinctions
Dimension Focus Governance Implication
Internal (Governable)
1 Individualism vs. Collectivism Self vs. group orientation Balance team incentives between personal accountability and collective outcomes
3 Uncertainty Avoidance Comfort with ambiguity Adjust processes:
high avoidance ➡ clear rules
low avoidance ➡ flexible experimentation
4 Masculinity vs. Femininity Competition vs. cooperation Align leadership style:
assertive goal-driven vs. relational
quality of life emphasis
5 Long-Term vs. Short-Term Orientation Future pragmatism vs. tradition/immediacy Shape strategy
invest in innovation cycles vs. emphasize quick wins and heritage
External (Contextual)
0 Power Distance Acceptance of hierarchy Account for structural limits
flat vs. hierarchical authority patterns in organizations
6 Indulgence vs. Constraint Freedom vs. restraint Recognize societal norms
openness to leisure vs. strict codes of conduct

This creates a 4+2 model: four internal drivers for operational culture, two external forces that shape the environment. It distinguishes between what governance can actively modulate versus what governance must respect and adapt to. It also makes dashboards more actionable, since leaders can see which dimensions they can influence internally and which ones they must design around.
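The 4+2 split above can be made operational in a dashboard or governance tool. A minimal sketch in Python: the dimension names follow the table, while the `Dimension` class and the helper functions are illustrative assumptions, not a standard library.

```python
# Sketch of the 4+2 model: four internal (governable) dimensions and two
# external (contextual) constraints. Dimension names follow the table above;
# the class and helpers are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Dimension:
    name: str
    focus: str
    governable: bool  # True = internal lever, False = external constraint

MODEL_4_PLUS_2 = [
    Dimension("Individualism vs. Collectivism", "self vs. group orientation", True),
    Dimension("Uncertainty Avoidance", "comfort with ambiguity", True),
    Dimension("Masculinity vs. Femininity", "competition vs. cooperation", True),
    Dimension("Long-Term vs. Short-Term Orientation", "future pragmatism vs. tradition", True),
    Dimension("Power Distance", "acceptance of hierarchy", False),
    Dimension("Indulgence vs. Constraint", "freedom vs. restraint", False),
]

def governable_levers(model):
    """Dimensions leadership can actively modulate (internal)."""
    return [d.name for d in model if d.governable]

def contextual_constraints(model):
    """Boundary conditions governance must design around (external)."""
    return [d.name for d in model if not d.governable]
```

A dashboard built on this split can render the four governable levers as actionable and the two contextual constraints as fixed context, matching the "influence internally vs. design around" distinction.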
Subjective values are adaptive levers for governance, while objective values are boundary conditions that shape behaviour but don't yield easily to intervention. Epistemologically, this distinguishes subjective values (internal, interpretive, governable) from objective values (external, structural, constraining), and it aligns with business intelligence closed loops, where uncertainty isn't a flaw but a signal.
Uncertainty Avoidance, in particular, becomes a governance dial: high avoidance ➡ tight loops, low tolerance for ambiguity; low avoidance ➡ open loops, exploratory learning.
Dimension Focus Governance Implication
Subjective
1 Individualism vs. Collectivism Align incentives and team structures Reveals motivational asymmetries in decision loops
3 Uncertainty Avoidance Design process flexibility and risk tolerance Injects adaptive tension into closed loops , uncertainty becomes a learning input
4 Masculinity vs. Femininity Shape leadership tone and performance metrics Surfaces value conflicts in goal-setting and feedback
5 Long-Term vs. Short-Term Orientation Set strategic horizons and innovation cadence Modulates loop frequency and depth of insight capture
Objective
0 Power Distance Respect structural hierarchy and authority norms Defines access boundaries and escalation paths in BI systems
6 Indulgence vs. Constraint Acknowledge societal norms and behavioral latitude Frames behavioral data interpretation and ethical thresholds

Subjective values: Internally held, interpretive, and governable through dialogue, incentives, and learning. They vary across individuals and can be shifted through team dynamics and feedback loops.
Subjective values are loop-sensitive: they affect how feedback is interpreted, how decisions are made, and how learning occurs. Objective values: Structurally embedded, externally imposed, and less governable. They reflect societal norms, institutional structures, or inherited constraints that shape behavior but resist direct modulation.
Objective values are loop-bounding: they define what feedback is allowed, who can act on it, and what constraints shape the loop's operation.
Uncertainty Avoidance, in particular, becomes a governance dial, high avoidance leads to tight loops with low tolerance for ambiguity; low avoidance supports open loops and exploratory learning.
Loop Stage Subjective Values Influence Objective Values Constraint
Data Capture Individualism vs. Collectivism: shapes what data is noticed (self vs. group signals). Power Distance: defines who is allowed to collect or access data.
Interpretation Uncertainty Avoidance: governs tolerance for ambiguity in analysis. Indulgence vs. Constraint: frames acceptable narratives (open vs. restrained meaning).
Decision Masculinity vs. Femininity: biases toward competitive vs. cooperative choices. Power Distance: constrains who has authority to decide.
Action Long- vs. Short-Term Orientation: sets horizon for implementation (quick wins vs. long cycles). Indulgence vs. Constraint: limits behavioral latitude in execution.
Feedback All subjective values: modulate how lessons are internalized and adapted. Objective values: bound how feedback can be expressed or escalated.

In BI loops, uncertainty isn't noise; it's the adaptive signal. High Uncertainty Avoidance ➡ closed loops tighten, feedback is filtered, risk is minimized. Low Uncertainty Avoidance ➡ loops stay open, feedback is exploratory, innovation thrives. Thus, uncertainty avoidance is the governance dial that determines whether loops become rigid control systems or adaptive learning systems.
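As a sketch of that dial, consider a loop that filters incoming feedback by how ambiguous it is. The sample signals and the linear `tolerance` mapping are my assumptions for illustration, not part of Hofstede's model or any BI product.

```python
# Sketch: uncertainty avoidance as a governance dial on a feedback loop.
# High avoidance -> tight loop, only low-ambiguity feedback passes;
# low avoidance -> open loop, ambiguous signals are kept as learning input.
# The linear tolerance mapping and the sample signals are assumptions.

def filter_feedback(signals, uncertainty_avoidance):
    """signals: list of (message, ambiguity) with ambiguity in [0, 1].
    uncertainty_avoidance in [0, 1]: 1.0 = rigid control loop,
    0.0 = fully open adaptive loop."""
    tolerance = 1.0 - uncertainty_avoidance  # ambiguity the loop will admit
    return [msg for msg, ambiguity in signals if ambiguity <= tolerance]

signals = [("KPI on target", 0.1),
           ("customers behaving oddly", 0.6),
           ("weak signal from the field", 0.85)]

tight = filter_feedback(signals, uncertainty_avoidance=0.8)  # control system
open_ = filter_feedback(signals, uncertainty_avoidance=0.1)  # learning system
# tight keeps only the unambiguous KPI; open_ admits all three signals
```

The point of the sketch is that the same loop machinery becomes a rigid control system or an adaptive learning system depending only on the dial setting.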
diagonal tensions

RN-3.5 information on chptr 5

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-3.5.1 Info
butics
https://www.linkedin.com/pulse/three-axes-now-what-stefan-norrvall-55gkc/ https://open.substack.com/pub/synexia/p/three-axes-now-what
https://www.linkedin.com/pulse/thoughts-midlife-work-power-becoming-unemployable-stefan-norrvall-s8m9c
Because:
  1. The system requires unexamined assumptions to function.
  2. The individual now treats those assumptions as objects of inquiry.
  3. This introduces friction, delay, and legitimacy questions.
  4. Which the system interprets as inefficiency or non-compliance.
  5. Leading to exclusion despite unchanged or increased competence.
That is a structural proof, not a moral one. First order (FO): "How do we perform better within the current rules?" Second order (SO): "Are these the right rules, and what happens if we change them?" Some things are worth repeating: the term 'Iron Triangle' was coined in 1956 in relation to the legislative process in the USA. It has nothing to do with project management.
butics
https://www.futocracy.network/landing https://www.linkedin.com/feed/update/urn:li:activity:7415022810105864192/ download "The End of Change Management as We Know It: Why Organisations Fail to Absorb Change and How Futocracy Offers a New Operating System" (Reg Butterfield)
RN-3.5.2 Info
Halfway definitions for clear human-level understanding
It's important to keep this **simple and usable**, not theoretical. Simple definition (one sentence): a half-point is the moment when what you know still exists, but no longer helps you decide what to do next.
Slightly expanded, a **half-point** is a transition moment: it feels like *being stuck between "this used to work" and "something else is needed, but I don't know what yet."* Two essential properties (easy to remember):
  1. Knowledge breaks before it is replaced
    You don't step into new understanding smoothly. There is always a period where:
    • confidence drops,
    • ambiguity increases,
    • and action feels risky or unclear.
  2. You can't skip it
    Half-points cannot be optimized away, delegated, or designed around. They must be **lived through**.
The two most common half-points:
Half-point 1 - *Meaning breaks*: "I know how this works, but it no longer explains what's happening." This is where **learning becomes real**.
Half-point 2 - *Responsibility appears*: "I understand this now, and that means I can't avoid taking responsibility." This is where **learning ends and governance begins**.
What a half-point is *not*: a half-point is not something to be optimized away; it is a **necessary transition**.
Why half-points feel uncomfortable (and that's normal): the discomfort is not a bug, it's the signal that **real change is happening**.
A simple metaphor (often helpful): think of crossing a river on stepping stones; at a half-point you must **rebalance**, not rush.
One-line takeaway: **half-points are the moments where progress stops being about doing better and starts being about becoming different.**
Exploring the Practice Rationality, Strategy as Practice, and Epistemologies of the South: Towards Wider Strategic Research https://www.researchgate.net/publication/365480895_Exploring_the_Practice_Rationality_Strategy_as_Practice_and_Epistemologies_of_the_South_Towards_Wider_Strategic_Research

formal method note 6*6 reference grid usage

Method Note: Diagonal Tension Mapping Using a 6*6 Grid.
This method formalizes the use of a **6*6 grid as a phase space** for exploring developmental, organizational, and epistemic transitions, while explicitly **rejecting grid cells and diagonals as developmental stages or movement paths**. The grid is used to surface **tensions, half-points, and system boundary crossings** that are otherwise obscured by conventional matrix-based models (e.g., 3*3 frameworks).
  1. Problem Statement
    Many systems frameworks rely on square matrices (most commonly 3*3) to represent development, learning, or organizational maturity. These frameworks implicitly assume:
    • continuity of development,
    • commensurability across dimensions,
    • and reversibility of movement.
    Empirical evidence from learning systems, enterprise architecture, governance, and AI development shows that the most consequential transitions are discontinuous, irreversible, and system-changing. Conventional grid usage obscures these transitions.
  2. Core Design Principles
    The 6*6 grid is constructed according to the following principles:
    1. The grid is not a level model. Cells do not represent stages, states, or maturity levels.
      They function only as coordinate intersections between orthogonal dimensions.
    2. Axes represent constraints, not progression
      Rows and columns represent orthogonal constraints (e.g., epistemic depth, social scale, normative force, temporal irreversibility). Movement along rows and columns is:
      • reversible,
      • optimizable,
      • and designable.
    3. Diagonals are not trajectories. Diagonals must never be interpreted as movement paths. Instead, they function as tension lines where incompatible constraints intersect.
    4. Meaning emerges diagonally. Transformational significance appears **only** on diagonals, where:
      • learning collides with identity,
      • understanding collides with responsibility,
      • capability collides with legitimacy.
  3. Why a 6*6 Grid (Minimal Sufficiency)
    A 6*6 grid is the smallest square structure that allows:
    • separation of epistemic, existential, and normative dimensions,
    • representation of individual, collective, and institutional perspectives without collapse,
    • visibility of system boundary crossings without reifying them as levels,
    • multiple valid centers (polycentric reading).
    Smaller grids (3*3, 4*4, 5*5) compress late-stage normativity and force half-points into artificial cells.
  4. Core Movement vs. Tension
    Axis-aligned movement along rows or columns represents:
    • elaboration within a system,
    • refinement of competence,
    • scaling without system change.
    This movement is legitimate, reversible, and subject to optimization.
    Diagonal-tension intersections represent:
    • breakdown of prior coherence,
    • affective destabilization,
    • emergence of irreversibility,
    • potential system change.
    These are diagnostic zones, not actionable steps.
  5. Half-Points as Events, Not Locations
    Half-points are defined as moments where:
    > prior knowledge remains available but no longer coordinates action.
    In this method:
    • half-points are not located in cells,
    • they appear as zones along diagonals,
    • they cannot be designed, only encountered.
    This preserves the ontological distinction between learning and governance, cognition and normativity.
  6. Interpretive Use
    The grid is used by asking diagonal questions, not by tracing paths:
    • Where does competence stop producing meaning?
    • Where does understanding become binding responsibility?
    • Where does local sense-making fail when scaled socially?
    • Where does design encounter legitimacy limits?
    Answers indicate tension zones, not solutions.
  7. What the Method Explicitly Avoids
    This method intentionally avoids:
    • maturity models,
    • stage-based development,
    • capability ➡ value extrapolation,
    • learning ➡ governance continuity,
    • symmetry-based integration claims.
    These are treated as category errors.
  8. Applicability
    The method is particularly suited for:
    • enterprise architecture failure analysis,
    • agentic AI governance and alignment,
    • leadership and legitimacy studies,
    • polycratic and multi-center organizational design,
    • second-order systems inquiry.
  9. Summary Statement
    • The 6*6 grid is not a representation of development, but a **diagnostic phase space**.
    • Movement occurs orthogonally; transformation appears diagonally.
    • Half-points are events, not positions, and cannot be stabilized by structure.
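A minimal coded sketch of the method note, under the assumption that cells are plain coordinates: axis-aligned displacement is classified as reversible elaboration, and anything off-axis is flagged as a tension zone to diagnose rather than a step to take. The function names are mine.

```python
# Sketch of the 6*6 diagnostic phase space. Cells are coordinate pairs only;
# axis-aligned movement is reversible elaboration within a system, while any
# diagonal displacement marks a tension zone to diagnose, not a path to take.

N = 6  # minimal square separating epistemic, existential, normative axes

def classify_move(a, b):
    """a, b: (row, col) coordinates. Returns how the displacement reads."""
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    if dr == 0 or dc == 0:
        return "axis-aligned"     # reversible, optimizable, designable
    return "diagonal tension"     # incompatible constraints intersect

def tension_zones(cell):
    """Diagonal neighbours of a cell: zones where half-points may appear."""
    r, c = cell
    return [(r + dr, c + dc)
            for dr in (-1, 1) for dc in (-1, 1)
            if 0 <= r + dr < N and 0 <= c + dc < N]
```

Note the deliberate asymmetry: there is no function that "moves" along a diagonal, matching the method's rule that diagonals are tension lines, never trajectories.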

RN-3.5.3 Info
https://www.linkedin.com/posts/teambuildingny_change-management-and-organization-development-activity-7416441348599226368-BN3j https://mikecardus.com/change-management-organization-development/
RN-3.5.4 Info
diagonal tensions

RN-3.6 information on

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RN-3.6.1 The role of architecture is to constrain developers' unnecessary creativity
Strategy and Planning are very different things
What Context Graphs Made Impossible to Ignore (LI: by J Bittner & Colbie Reed 2026) Enterprise software is very good at storing state. It is still bad at storing decisions. Most systems can tell you what happened. Very few can tell you why a choice was made at the moment it mattered, which constraints were binding, or who had authority to act. That gap is why connecting an LLM to your systems so often disappoints. Models can see data. They cannot see decision logic.
Recent writing on context graphs has made this failure hard to ignore, especially the work of Jaya Gupta and Ashu Garg, including "AI's Trillion Dollar Opportunity: Context Graphs" and "Where Context Graphs Materialize". Together, those pieces clarify two things: decisions must become first-class artifacts, and in practice they emerge bottom-up from real operations, not clean schemas. That insight is important. It also exposes the next problem.
What breaks once decisions are captured. Once organizations start capturing real decisions at scale, a new class of failure shows up fast. Repeated exceptions begin to look like policy. Similar decisions begin to look like precedent. Heuristics quietly harden into authority. This is not a modeling problem. It is a governance problem. The issue is not that organizations lack structure or ontology. They already rely on many assumptions at once about roles, rules, permissions, interpretations, and authority. The issue is that these commitments are implicit, fragmented, and unmanaged.
Why ambiguity destroys ROI. When systems cannot distinguish between:
    • a rule and an interpretation of a rule,
    • an exception and an error,
    • a recommendation and a permission,
    • similarity and true comparability,
they still appear to work. Until governance depends on them. Then ambiguity becomes failure. This is where AI ROI is actually lost. Most ROI disappears after deployment, not during pilots.
Not because models fail, but because organizations cannot trust systems to act without constant supervision. Teams re-litigate decisions. Approvals get escalated unnecessarily. Agents take actions that later have to be undone. These costs rarely show up as line items. They show up as friction, delay, and risk.
The overlooked leverage point. The organizations that see durable returns treat decision memory differently. A decision does not stand because it happened. It stands because it was permitted under the rules in force at the time. When systems can represent that distinction, several things change quickly:
    • decisions become reusable without re-approval,
    • exceptions stop silently turning into policy,
    • agents can act autonomously without expanding risk,
    • governance moves inside the system instead of sitting on top of it.
This is where compounding value comes from.
Where context graphs actually lead. Context graphs reveal how decisions are made. They also make something unavoidable clear. Once decision memory exists, meaning and legitimacy have to be managed explicitly. That is not an academic concern. It is where real AI ROI is won or lost. Smarter models help. Better data helps. But the organizations that win long term are the ones that can say, clearly and defensibly, why an action was allowed, not just that it occurred. That is the next layer context graphs surface. And it is where enterprise AI becomes trustworthy at scale.
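The leverage point above ("a decision stands because it was permitted under the rules in force at the time") can be sketched as a data structure. All field names, rule ids, and roles here are hypothetical illustrations, not taken from the context-graph literature.

```python
# Sketch: a decision as a first-class artifact. A record keeps the rules in
# force and the deciding authority at decision time, so the system can later
# answer "was this permitted?" rather than only "did this happen?".
# All field names, rule ids, and roles are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    decision: str
    decided_by: str                # role that acted
    rules_in_force: frozenset      # rule ids invoked at decision time

def permitted(record, authority_table):
    """A decision stands because it was permitted under the rules in force
    at the time: every rule it invoked must be one its actor may apply."""
    allowed = authority_table.get(record.decided_by, frozenset())
    return record.rules_in_force <= allowed

authority = {"credit_officer": frozenset({"R-12", "R-40"})}

rec = DecisionRecord("waive late fee", "credit_officer", frozenset({"R-12"}))
# permitted(rec, authority) is True: the decision is reusable without
# re-approval; a record citing rules outside its actor's authority would
# return False instead of silently becoming precedent
```

The design choice is that legitimacy is checked against the record itself, not against current state, which is what lets exceptions stay exceptions instead of hardening into policy.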
Strategy and Planning are very different things
There is no such thing as 'Strategic Planning'. (LI: A.Brueckmann 2025) You need both. Connect them the right way. And link them to Foresight and Signaling.
Business-rules rules
Business rules are about running the business correctly (LI: R.Ross 2025) I recently read the following statement about data quality: "Business rules capture accurate data content values." Much confusion arises over business rules. Professionals who work with data/system architectures often have a technical view of them. That's off-target. Business rules are not data rules or system rules. A true business rule is a criterion for running the business. Business rules are about business knowledge and business activity, not data - at least not directly.
In other words, data quality isn't really about the quality of your data, it's more about the quality of your business rules.
Unfortunately, trivial examples are almost always used to illustrate problems with data quality arising from failure to comply with business rules. Examples: Obviously, you do need rules like these, but don't be fooled! They barely scratch the surface. They just happen to be easy to talk about because they involve values of only a single field.
Sad to say, most discussions of data quality have been complicit in a vast oversimplification. Take the headlocks off!
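To make Ross's distinction concrete, here is a hedged illustration (my examples, not his): a trivial single-field data check next to a rule that is a criterion for running the business.

```python
# Illustration (my examples, not Ross's): a trivial single-field data check
# versus a true business rule, a criterion for running the business that only
# indirectly surfaces as data quality.

def field_check(record):
    """Single-field rule: easy to state, barely scratches the surface."""
    return record.get("country_code") in {"NL", "DE", "BE"}

def business_rule(order, customer):
    """'An order must not be accepted for a customer with an overdue
    invoice.' Governs business activity; data only records compliance."""
    return not (order["accepted"] and customer["has_overdue_invoice"])
```

The first rule can be enforced at data entry; the second can only be judged against business state and activity, which is why data quality ends up reflecting the quality of the business rules.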
From the comments:
RN-3.6.2 Creating new artInfo
RN-3.6.3 Becoming the opposite of what was intended
butics
Five systems insights you might not have heard (LI: Abdul A 2025) We've all heard the familiar lines: the whole is greater than the sum of its parts, POSIWID, the law of requisite variety. Here are five lesser quoted (and somewhat paraphrased) systems insights that show up in real organisations, often only when it's too late! Perhaps we'll bake them more explicitly into our operating models in 2026?
  1. "The most dangerous systems are those that work." (Stafford Beer)
    Systems that appear successful suppress weak signals. By the time failure becomes visible, it's already systemic.
    Why it matters: optimisation often trades short term success for long term fragility.
  2. "Effectiveness without ethics is indistinguishable from incompetence." (C. West Churchman)
    A system can deliver outputs flawlessly while producing the wrong outcomes. Why it matters: performance metrics don't resolve responsibility.
  3. "Learning occurs when the system can no longer do what it used to do." (Gregory Bateson)
    Real learning starts when existing rules fail and must be redesigned.
    Why it matters: smooth performance often prevents adaptation.
  4. "Every viable system contains the seeds of its own obsolescence." (Jamshid Gharajedaghi)
    Success changes the environment and locks in structures that later become liabilities.
    Why it matters: viability requires continual redesign.
  5. "The question is never whether a system is political, but whose politics it embodies." (Werner Ulrich)
    Every system encodes assumptions about who benefits, who decides, and who bears the cost.
    Why it matters: systems design is always an ethical act, whether acknowledged or not.

butics
The Need To Move Beyond Homo Faber (Dr Maria daVenza Tillmanns 2015) Very often, our opinions and beliefs serve as answers to questions we have in life; yet Homo cognito sets out to question these opinions and beliefs. Homo cognito questions the very lenses through which we see and interpret the world. Ordinarily, we may question what we see through those lenses (Homo faber); but rarely do we question those lenses themselves (Homo cognito). As answers, opinions and beliefs tend to become fixed, and lose their flexibility to accommodate to life's unique situations.
Thinking becomes shortsighted. We lose the ability to see the nuances of every situation and we respond accordingly. All we can do is react to things in a limited, instrumentalist way. However, to be able to respond to the uniqueness of a particular situation requires an exercise of free will where one is free to respond with one's whole being (Buber) and for which response one is solely responsible. How I choose to respond may or may not be the 'right' way; but we can learn better and worse ways to respond to a situation. We will never know whether the way we have chosen to respond is the absolute best way, so we have to be able to act decisively in the face of not knowing.
Homo cognito accepts that there are no ultimate answers in any given situation, only better or worse answers. Homo cognito is not searching for the ultimate answer, or Truth in science or religion; but rather is searching for the next question to bring us closer to a deeper understanding of how the world works. The next question comes out of relationship, which is in constant flux. No concert piece is ever played exactly the same way twice, which is why it is art.
In perfecting herself, Homo faber, the 'tool-maker', has made herself obsolete. When a relationship still existed between a tool-maker and his materials (wood, iron, masonry), or his land (cattle, crops), or his family (immediate and extended), he could exercise his free will with his whole being, in terms of how he chose to respond to the uniqueness of a particular challenge. Yet, with technical advancement, technological skill started to replace human skill. We sacrificed relationship for profit. There was money to be made by doing things the 'right' way or the only way. Free will was no longer needed. Instead, we've ended up on the conveyor belt of technological processes and processed knowledge.

 horse sense
Redundancy is a requirement of not being redundant in the system
Understanding is not a prerequisite for survival. (A. Abdul 2026)
I keep coming back to this quote from Stafford Beer in Brain of the Firm: I find it profound and unsettling. It's made me (re)think how much weight we place on intelligence and understanding, especially in how we design Operating Models and Data & AI Platforms, and even how we understand ourselves. We tend to assume the right order is: first understand, then act. Beer flips that around. In complex, fast-moving environments, systems don't survive because they understand what's happening. They survive because they can regulate the effects of what's happening quickly enough to stay coherent. Understanding quite often comes later ... if the system is still around.

🎯 Know_npk Gestium Stravity Human-cap Evo-InfoAge Learn-@2 🎯
  
🚧  Knowium P&S-ISFlw P&S-ISMtr P&S-Pltfrm Fractals Learn-I 🚧
  
🔰 Contents Frame-ref ZarfTopo ZarfRegu SmartSystem ReLearn 🔰

© 2012,2020,2026 J.A.Karman
📚 data logic types Information Frames data tech flows 📚

🎭 Concerns & Indices Elucidation 👁 Summary Vitae 🎭