Design Data - Information flow
RH-1 Design Data - Information flow
RH-1.1 Contents
⚙ RH-1.1.1 Looking forward - paths by seeing directions
A reference frame in mediation innovation
There is a link back to the main topic in a shifting frame.
Contexts:
◎ C-Shape mediation communication
↖ C-Serve technology, models
↗ I-C6isr organisational control
↙ infotypes
↘ techflows
Fractal focus in mediation innovation
The cosmos is full of systems, and we are not good at understanding what is going on.
In an ever more complex and fast-changing world we search for more certainty and predictability, where we would be better off understanding the choices under uncertainty and unpredictability.
Combining:
- Systems thinking, decisions, ViSM (Viable System Model), the good regulator
- Lean as the instantiation of identification systems
- The Zachman 6*6 reference frame principles
- Information processing, the third wave
- Value Stream (VaSM) Pull-Push cycle
- Improvement cycles: PDCA, DMAIC, SIAR, OODA
- Risks and uncertainties for decisions in the now, near and far future: VUCA, BANI
An additional challenge with all this complexity is that it is full of dualities and dichotomies.
⚙ RH-1.1.2 Local content
⚖ RH-1.1.3 Guide to reading this page
The quest for methodologies and practices
This page is about a mindset framework for understanding and managing complex systems.
The focus is on the type of complex systems in which humans are part of the system and build the systems they are part of.
When a holistic approach to organisational missions and organisational improvements is wanted, starting at the technology pillar is what is commonly done.
Knowing what is going on on the shop floor (Gemba).
Working toward an approach for optimised systems, there is a gap in knowledge and tools.
👁 💡 The proposal to solve those gaps is
"Jabes". It is about:
- ⚙ Documenting & communicating knowledge (resources, capabilities, portfolio, opportunities).
- 📚 Defining boundaries and context in the knowledge domains of disciplines.
- 🎭 Within a knowledge domain or discipline, a standardised metadata structure.
- ⚖ Maturity evaluation of the quality of documented & communicated knowledge.
Seeing "Jabes" as a system supporting systems, the question is: what system is driving Jabes?
The system driving Jabes must have similarities to the systems it supports.
👁 💡 ZARF (Zachman-Augmented Reference Frame) is a streamlined upgrade to the classic Zachman matrix.
It turns a static grid into a practical, multidimensional map that guides choices, enforces clear boundaries, and adds a sense of time, so teams move methodically from idea to reality.
Business intelligence - Artificial intelligence
The world of BI and Analytics is a challenging one.
It is not the long-used methodology of producing administrative reports.
A lot needs to get solved; it is about information for shaping change.
The issue is that it is a technology-driven situation, where it should be about:
- ⚙ Operational Lean processing, design thinking
- 📚 Doing the right things, organisation & public.
- 🎭 Help in underpinning decisions, boardroom usage.
- ⚖ Being in control, being compliant in missions.
Dashboarding and reporting for closed loops, as used in good regulators, are approaches in the attempts to solve those issues in systems.
❗ ⚠ The fractalness in systems makes it unclear who the stakeholder for some dashboarding really is.
The technology drive by the market hides the question of for whom and for what in the why, across the variety and fractals of systems.
A recurring parable for methodologies and practices
An often used analogy is life at and on ships.
The simplification is that there is a clear boundary between the internal and external systems of the ship.
There are three nicely clear layers, stacked vertically:
- The executives deciding over what happens on the ship and the direction it should go.
- Space for the product or service, whether it is passengers or cargo.
Managing this needs dedicated staff.
- The engines for the structure (data centre), below deck and out of sight.
Dedicated staff operating them, informing and advising on decisions.
This can be set in a perspective of: strategic, tactical, operational.
Another perspective could be: far future, near future and the now.
For each of them there are however dualities and dichotomies.
There are three nicely clear columns, side by side:
- The structure for the goal and purpose, what the ship does (BPM).
- Managing the structure for the technology, the engines (SDLC).
- Getting the information for informed decisions (Analytics).
Interacting with the external systems in some controlled alignment:
- Information resources for getting better decisions (Data).
- Improving the knowledge by what is known (Meta).
e.g. the product - service handling in cargo and passengers.
- Changing the knowledge in what is not already known (Math).
e.g. new product - service opportunities or a completely different ship.
In a figure, see the right side.
The allegory of using a ship goes on into how to manage that.
Your business is a boat navigating the river of waste. (S.Angad, 2025)
What you see above water are symptoms.
What's hidden below are the real problems.
| What's visible | Real problems |
| Machine downtime | Untrained workforce |
| Quality defects | Forecast inaccuracy |
| Long changeovers | Poor communication |
| Schedule delays | Outdated processes |
Most leaders focus on what's visible, but these are just rocks breaking the surface.
The real problems are underwater.
Here's what I've learned after years of helping manufacturers:
👉🏾 You can't steer around every rock.
👉🏾 You have to lower the water level.
When you reduce the waste in your system, problems that were hidden suddenly become visible.
That's uncomfortable. But it's necessary.
Most improvement efforts fail because they treat symptoms.
Real improvement lowers the water level.
| Treat symptoms | Real improvement |
| Hire more inspectors for quality issues | Train people to prevent quality issues |
| Add buffer inventory for supply problems | Fix forecasting to reduce inventory needs |
| Work overtime for capacity constraints | Improve flow to eliminate capacity constraints |
| Buy faster machines for throughput gaps | Standardize work to reduce variation |
The goal isn't to avoid problems.
The goal is to see them clearly so you can solve them permanently.
Your biggest competitive advantage isn't having fewer problems.
It's solving problems faster than your competition.
Lower the water level. Expose the rocks. Remove them one by one.
⚒ RH-1.1.4 Progress
Done and currently working on:
- 2025 week 50
- The old page completely consolidated; restarting from an emptied page.
The topics that are unique on this page
👉🏾 Rules and axioms for the Zachman-augmented reference framework (ZARF).
- Based on the classic way of six categorised types of questions for thinking (one-dimensional)
- Stepping over the 6*6 two-dimensional Zachman idea
- Extending to a 3*3*4 three-dimensional approach (see the sketch after this list)
- Awareness of a 6*6*6 (..) multidimensional projection
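As a rough illustration of the dimension counts, a minimal Python sketch; the axis names (abstraction, responsibility, time horizon) are assumptions for illustration only, not a normative ZARF definition. Note that 3*3*4 = 36, the same cell count as the 6*6 grid, re-dimensioned.

```python
# Minimal sketch of the ZARF dimension counts; the axis names are
# illustrative assumptions, not a normative ZARF definition.
from itertools import product

QUESTIONS = ("what", "how", "where", "who", "when", "why")   # classic 1-D

ABSTRACTION = ("conceptual", "logical", "physical")           # 3 (assumed axis)
RESPONSIBILITY = ("strategic", "tactical", "operational")     # 3 (assumed axis)
TIME_HORIZON = ("past", "now", "near future", "far future")   # 4 (assumed axis)

cells = list(product(ABSTRACTION, RESPONSIBILITY, TIME_HORIZON))
print(len(QUESTIONS) ** 2, len(cells))   # 36 36: same cell count, re-dimensioned
```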
👉🏾 Connecting ZARF to systems thinking in the analogy of:
- Anatomy,
- Physiology,
- Neurology,
- Sociology - Psychology.
👉🏾 Explaining the patterns that are seen repeating in this.
- Connecting components for the system as a whole,
- There must be an effective good regulator for the system to be viable.
- Searching the relations of systems to their universe.
- Motivations and distractions seen in repeating patterns.
👉🏾 Use cases using the patterns for ZARF and by ZARF.
- More practical examples that help in applying ZARF
- Use cases are not fixed but can vary in time
- Adaptation to use cases when they are clearly recognised.
Highly related in the domain context of information processing are:
- C-Shape, the abstracted approach for shaping, the related predecessor.
- r-c6isr, command and control, a practical and abstracted approach for what to shape.
- c-shape, the practical follower of the predecessor.
RH-1.2
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-1.2.1 Info
Anti-buzz hype simplified dashboard distinctions
The anti-buzz-hype data strategy. A strategy is not:
- a fancy story about data strategy,
- a long document full of definitions,
- an "inspiring" deck without actions, choices, priorities, or ownership.
A data strategy is, however, a set of explicit choices that:
- provides direction (what to do/not to do, why),
- is manageable (who does what, decides, how to measure),
- is executable (capacity, roadmap, preconditions).
Where do "theorists" typically get stuck?
The theory of data strategy is confused with strategy.
This mainly occurs at these intersections:
- Goals without prioritization ➡ "we want to be data-driven" (no choices)
- Governance without a mandate ➡ roles on paper, no one decides
- Roadmap without capacity ➡ list of actions, no feasibility
- Values without management measures ➡ ethics/GDPR mentioned, no controls/KPIs
- Data as a concept, not master data ➡ unclear what constitutes "truth"
What's needed at a minimum to make this a "strategy" (the smallest upgrade):
| # | Goals | Information capability |
| 1 | Goal & focus | 3-5 priority data domains + 10 "don'ts" |
| 2 | Starting position | 1-page baseline (maturity + top bottlenecks + risks) |
| 3 | Organizational model | decision path + ownership + portfolio board (who decides what) |
| 4 | Management measures | KPI set (quality/delivery/value/compliance) + rhythm + interventions |
| 5 | Master data | top 15 objects + source agreements + definitions + management process |
| 6 | Capacity | roles/FTEs/skills + budget bandwidth + 12-month roadmap |
| 7 | Transformational options | |
Anti-buzz hype simplified dashboard distinctions
Complexity is simplicity gone wrong.
No bottom-up approach, no raising awareness, no building support, no nonsense, no following over-the-top catch-all terms.
"Simply" knowing how governance works is the best starting point.
A GOOD dashboard consists of at least six components.
| # | Goals | Information capability |
| 1 | The goal | |
| 2 | Situation awareness | |
| 3 | Model of the system | |
| 4 | Options for influence | |
| 5 | Master data | |
| 6 | Capacity & capabilities | |
| 7 | Transformational options | |
RH-1.3
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-1.3.1 Info
Turing thesis
The data explosion. The change is the amount we are collecting, measuring processes as new information (edge).
📚 Information questions.
⚙ Measurements, data, figures.
🎭 What to do with new data?
⚖ Legally & ethically acceptable?

Tuning performance basics.
Solving performance problems requires understanding of the operating system and hardware.
That architecture was set by von Neumann (see design-math).
A single CPU, limited internal memory, and external storage.
The time differences between those resources are orders of magnitude (factor 100-1000).
Optimizing is balancing between choosing the best algorithm and the effort to achieve that algorithm.
That concept didn't change. The advance in hardware made it affordable to ignore the knowledge of tuning.
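To make that factor of 100-1000 concrete, a small sketch with indicative latency magnitudes; the numbers are approximate, hardware-dependent ballpark figures, not measurements.

```python
# Indicative latency magnitudes (approximate, hardware dependent).
# The point is the factor of roughly 100-1000 between neighbouring tiers.
LATENCY_NS = {
    "L1 cache reference": 1,
    "main memory reference": 100,
    "SSD random read": 100_000,
    "disk seek": 10_000_000,
}
for tier, ns in LATENCY_NS.items():
    print(f"{tier:>24}: {ns:>12,} ns")
```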
The Free Lunch Is Over.
A Fundamental Turn Toward Concurrency in Software, by Herb Sutter.
If you haven't done so already, now is the time to take a hard look at the design of your application, determine what operations are CPU-sensitive now or are likely to become so soon,
and identify how those places could benefit from concurrency. Now is also the time for you and your team to grok concurrent programming's requirements, pitfalls, styles, and idioms.
An additional component: the connection from the machine (multiple CPUs, several banks of internal memory) to multiple external storage boxes by a network.
Tuning CPU - internal memory.
Minimize resource usage:
- Use data record processing in serial sequence (blue).
- Indexes bundled (yellow).
- Allocate the correct size and correct number of buffers.
- Balance buffers between the operating system (OS) and the DBMS. A DBMS is normally optimal without OS buffering (DIO).
❗ The "balance line" algorithm is the best. A DBMS will do that when possible. A sketch follows below.
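A minimal sketch of the balance-line idea: two inputs sorted on the same key are matched in one sequential pass, so neither random access nor repeated scans are needed (a DBMS sort-merge join works the same way). Function and record names are illustrative; duplicate keys on both sides would need extra handling.

```python
def balance_line(left, right, key=lambda rec: rec[0]):
    """Pair records with equal keys; both inputs must be sorted on `key`."""
    li, ri = iter(left), iter(right)
    l, r = next(li, None), next(ri, None)
    while l is not None and r is not None:
        if key(l) < key(r):
            l = next(li, None)              # advance the lower side
        elif key(l) > key(r):
            r = next(ri, None)
        else:
            yield l, r                      # keys in balance: a match
            l, r = next(li, None), next(ri, None)

# Example: customers and orders, both sorted on customer id.
customers = [(1, "Ada"), (2, "Bob"), (4, "Eve")]
orders = [(1, "book"), (3, "lamp"), (4, "pen")]
print(list(balance_line(customers, orders)))
# [((1, 'Ada'), (1, 'book')), ((4, 'Eve'), (4, 'pen'))]
```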
Network throughput.
Minimize delays, use parallelization:
- Stripe logical volumes (OS).
- Parallelize IO transport lines.
- Optimize the buffer transport size.
- Compressing and decompressing data at the CPU can decrease elapsed time.
- Avoid locking caused by shared storage or clustered machines.
⚠ The transport buffer size is a cooperation between the remote server and the local driver. The local optimal buffer size can be different (see the sketch below).
Resizing data in buffers is a cause of performance problems.
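Buffer size is measurable rather than guessable. A hedged sketch: read the same file with different buffer sizes and time each pass; the file path is a placeholder and the optimum depends on OS, driver and storage.

```python
import time

def timed_read(path: str, bufsize: int) -> float:
    """Read `path` fully in chunks of `bufsize` bytes; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:    # unbuffered: we control the size
        while f.read(bufsize):
            pass
    return time.perf_counter() - start

# "bigfile.bin" is a placeholder; beware of OS cache effects between runs.
for size in (4 * 1024, 64 * 1024, 1024 * 1024):
    print(f"{size:>9} bytes: {timed_read('bigfile.bin', size):.3f} s")
```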
Minimize delays in the storage system.
- Multi-tier choices: SSD - hard disk - tape; local unshared - remote shared.
- Preferred: sequential or skipped-sequential access.
- Tuning for Analytics means big-block bulk sequential instead of random small-block transactional usage.
⚠ Using Analytics, tuning IO is quite different from transactional DBMS usage.
💣 This different, non-standard approach must be in scope with service management. The goal of sizing capacity is better understood than striping for IO performance.
⚠ DBMS changing types
A mix of several DBMSs is allowed in an EDWH 3.0. The speed of transport and the retention periods are important considerations.
Technical engineering for the details and limitations, given the state of the art and cost factors.
RH-1.4
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-1.4.1 Info
Changing the way of informing.
Combining data transfer, microservices, archive requirements and security requirements, and doing it with the maturity of physical logistics.
It goes in the direction of a centrally managed approach while doing as much as possible decentralised.
Decoupling activities when possible keeps the problems that pop up small enough to be humanly manageable.
 
Combining information connections.
There are a lot of ideas that, when combined, give another situation:
💡 Solving gaps between silos supporting the value stream.
Those are the rectangularly positioned containers connecting the red/green layers (in total eight internal intermediates).
💡 Solving management information into the green/blue layers inside every silo.
These are the second containers in every silo (four: more centralised).
💡 Solving management information gaps between the silos following the value stream at a higher level.
These are the containers at the circle (four intermediates).
Consolidate that content to a central one.
🎭 The result is having the management information supported in nine (9) containers following the product flow at strategic level. Not a monolithic central management information system, but one that is decentralised and delegates as much as possible to satellites.
💡 The outer operational information rectangle holds a lot of detailed information that is useful for other purposes. One of these is the integrity processes.
A SOC (Security Operations Centre) is an example of adding another centralised one.
🎭 The result is having the management information supported in nine (9) containers following the product flow at strategic level, another eight (8) at the operational level, and possibly more.
Not a monolithic central management information system, but one that is decentralised and delegates as much as possible to satellites.
🤔 Small is beautiful: instead of big monolithic costly systems, many smaller ones can do the job better and more efficiently. The goal: repeating a pattern instead of a one-off project shop.
The duality: when doing a change, it will be like a project shop.
Containerization.
We are used to the container boxes as used these days for all kinds of transport.
The biggest container ships travel the world reliably, predictably, affordably.
Normal economical usage: load - reload, returning, many predictable reliable journeys.

The first container ships were these Liberty ships. Fast and cheap to build. The high loss rate was not a problem but was solved by building many of them.
They were built as project shops, but at many locations. The advantage of a known design to build over and over again.
They were not designed for many journeys; they were designed for deliveries in war conditions.
Project shop.
To cite:
This approach is most often used for very large and difficult to move products in small quantities.
...
There are cases where it is still useful, but most production is done using job shops or, even better, flow shops.
💣 The idea is that everything should become a flow shop, even when not applicable. In ICT, delivering software at high speed is seen as a goal; that idea is missing the data value stream as the goal.
Containerization.
Everybody attaches a different context to the word "data". That is confusing when trying to do something with data. A mind switch is seeing it as information processing in enterprises.
As the data centre is not a core business activity for most organisations, there is a move to outsourcing (cloud, SaaS).
Engineering a process flow, there will be waits at a lot of points.
At the starting and ending points it goes from internal to external, where far longer waits for artefacts or product deliveries will happen.
Avoiding fluctuations, having a predictable balanced workload, is the practical solution to become efficient.
Processing objects, collecting information and delivering go along with responsibilities.
It is not sexy, in fact rather boring. Without good implementation, all other activities easily become worthless. The biggest successes, like Amazon, are probably based more on doing this very well than on anything else.
The Inner Workings of Amazon Fulfillment Centers
Commonly used ICT patterns for processing information.
For a long time the only delivery of an information process was a hard-copy paper result.
Delivery of results has changed to many options. The storing of information has changed as well.
 
Working on a holistic approach to information processing, starting at the core activities, can solve a lot of problems. Why keep working on symptoms and not on root causes?
💡 Preparing data for BI and Analytics has become an unnecessary prerequisite. Build a big design up front: the enterprise data warehouse (EDWH 3.0).
Data technical - machine oriented
The technical machine-oriented approach is about machines and the connections between them (network).
The service of delivering infrastructure (IaaS) is limited to this kind of objects, not how they are interrelated.
The problems to solve behind this are questions of:
- Any machine has limitations with performance.
❓ Consideration question: is it cheaper to place additional machines (* the default action) or to have human experts analyse the performance issues?
- Confidentiality and availability.
Data access has to be managed, backups and software upgrades (PaaS), all with planned outage times. Planning and coordination of the involved parties.
❓ Consideration question: is it cheaper to place additional machines (* the default action) or to have human experts manage the additional complexity of machine support?

🤔 A bigger organisation has several departments. Expectations are that their work has interactions and that there are some central parts.
Sales, marketing, production lines, bookkeeping, payments, accountancy.
🤔 Interactions with actions between all those departments lead to complexity.
🤔 The number of machines and the differences in stacks are growing fast, no matter where these logical machines are.
An own dedicated number of machines for every business service will increase complexity.
The information process flow has many interactions, inputs, transformations and outputs.
- ⚠ No relationship machines - networking: a problem to solve that will pop up at some point.
- ⚠ Issues from data type conversions and integrity validation when using segregated sources (machines).
💡 Reinvention of a pattern: the physical logistics warehouse approach is well developed and works well. Why not copy that pattern to ICT? (EDWH 3.0)
What is delivered in an information process?
The mailing/print processing is the oldest front-end system using back-end data. The moment of printing is not the same as that of the manufactured information.
Many more front-end deliveries have been created in recent years, the dominant ones becoming web pages and apps on smartphones.
A change in attitude is needed, while still seeing it as a delivery that needs the quality of information ensured by the process.
Change data - Transformations
A data strategy helping the business should be the goal. Processing information as "documents" having detailed elements encapsulated.
Transport & archiving aside from producing it, as a holistic approach.

Logistics using containers.
The standard approach in information processing is focusing on the most detailed artefacts, trying to build a holistic data model for all kinds of relationships.
This is how goods were once transported: as single items (pieces). That has changed into containers having encapsulated goods.
💡 Use labelled information containers instead of working with detailed artefacts.
💡 Transport of containers requires some time. The required time is however predictable.
Trusting that the delivery is on time and the quality conforms to expectations is more efficient than trying to do everything in real time.

Information containers arrive almost ready for delivery, having a more predictable moment of delivery to the customer.
💡 The expected delivery notice is becoming standard in physical logistics. Why not do the same in administrative processes? A sketch follows below.
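A hedged sketch of a labelled information container carrying an expected delivery notice, mirroring the physical-logistics practice; field names and values are illustrative assumptions, not a fixed EDWH 3.0 standard.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class InfoContainer:
    context_key: str              # the label: logical context key
    source: str                   # where the container was received from
    moment_of_relevance: datetime
    expected_delivery: datetime   # the "expected delivery notice"
    payload_ref: str              # reference to the encapsulated content

notice = InfoContainer(
    context_key="invoice-batch-2026-02",          # illustrative values
    source="billing",
    moment_of_relevance=datetime(2026, 2, 1),
    expected_delivery=datetime(2026, 2, 3, 6, 0),
    payload_ref="warehouse://invoices/2026-02",   # placeholder reference
)
print(f"{notice.context_key}: expected {notice.expected_delivery:%Y-%m-%d %H:%M}")
```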
RH-1.5
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-1.5.1 Info
Some mismatches in a value stream.
Aside from all direct questions from the organisation, many external requirements are coming in.
A limited list to get an idea of the regulations having impact on administrative information processing.
Business flow & value stream.

Having a main value stream from left to right, the focus can be top-down with the duality of processes - transformations and the product - information.
A complicating factor is that:
✅ Before external data can be retrieved, the agreement on what to retrieve must be there at some level.
✅ Before the delivery can be fulfilled, the request on what to deliver must be there.

Having the same organisation, the focus can be bottom-up with the layers in silos and separation of concerns.
A complicating factor is that:
❓ In the centre, needed government information does not come in by default. The request for that information does not reach the operational floor.
😲 The silos responsible for a part of the operating process do not, by default, exchange the needed information in the easiest way.
EDW development approach and presentation
BI DWH, data virtualization.
Once upon a time there were big successes using BI and Analytics. The successes were achieved by the good decisions made in those projects, not by best practices.
To copy those successes, the best way would be understanding the decisions that were made. It is a pity that these decisions, and why they were made, are not published.

The focus for achieving success changed to using the same tools as those successes.
BI, Business Intelligence, has long been claiming ownership of the E-DWH.
Typical in BI is that almost all data is about periods. Adjusting data to match the differences in periods is possible in a standard way.
The data virtualization is built on top of the "data vault" DWH 2.0, dedicatedly built for BI reporting usage.
It is not virtualization on top of the ODS or the original data sources (staging).

Presenting data using figures as BI.
The information for managers is commonly presented in easily understandable figures.
When used for giving satisfying messages or escalations for problems, there is a bias to prefer the satisfying ones over the ones alerting for possible problems.
😲 No testing and validation processes are deemed necessary, as nothing is operational, just reporting to managers.
💡 The biggest change for a DWH 3.0 approach is the shared location of data information being used for the whole organisation, not only for BI.
 
Dimensional modelling and the Data Vault, for building up a dedicated storage, are seen as the design pattern solving all issues.
OLAP modelling and reporting on the production data, for delivering new information for managers, to overcome performance issues.
A more modern approach is using in-memory analytics. In-memory analytics still needs a well-designed data structure (preparation).
 
😱 Archiving historical records that may be retrieved is an option that should be regular operations, not a DWH reporting solution.
The operations (value stream) process sometimes needs information from historical records.
That business question is a solution for limitations in the operational systems. Those systems were never designed and realised with archiving and historical information in mind.
⚠ Storing data in a DWH can be done in many possible ways. The standard RDBMS dogma has been augmented with a lot of other options.
Limitations: technical implementations are not well suited because of the difference to an OLTP application system.
Reporting Controls (BI)
The understandable goal of BI reporting and analytics reporting is rather limited, that is:
📚 informing management with figures,
🤔 so they can make up their mind on their actions - decisions.
The data explosion. The change is the amount we are collecting, measuring processes as new information (edge).
📚 Information questions.
⚙ Measurements, data, figures.
🎭 What to do with new data?
⚖ Legally & ethically acceptable?
Adding BI (DWH) to layers of enterprise concerns.
Having the three layers, separation of concerns:
- operations, business value stream (red)
- documentation (green)
- part of the product, describing it for a longer period
- related to the product for temporary flow reasons
- control, strategy (blue)
At the edges of those layers, inside the hierarchical pyramid, there is interesting information to collect for controlling & optimising the internal processes.
For strategic information control, the interaction with the documentation layer is the first one being visible.

Having the four basic organisational lines that are assumed to cooperate as a single enterprise in the operational product value stream circle, there are gaps between those pyramids.
 
Controlling them at a higher level uses information on which the involved parties, two by two, are in agreement. This adds another four points of information.
Consolidating those four interaction points into one central point makes the total number of strategic information containers nine.
⚠ ETL ELT - No Transformation.

Classic is the processing order:
⌛ Extract, ⌛ Transform, ⌛ Load.
For segregation from the operational flow, a technical copy is required.
Issues are:
- Every Transform adds logic that can get very complicated. Unnecessary complexity is waste to be avoided.
- The technical copy involves conversions between technical systems when they differ, and also introduces integrity questions through synchronisation. Unnecessary copies are waste to be avoided.
- Transforming (manufacturing) data should be avoided; it is the data-consumer process that should do the logic processing (a sketch follows below).
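A minimal sketch of the "no transformation in transport" point, assuming illustrative names: extract and load pass records through verbatim; the consumer applies its own logic at read time.

```python
def extract(source_rows):
    """Extract: a technical copy; records pass through unchanged."""
    yield from source_rows

def load(rows, store):
    """Load: append verbatim records to the shared store."""
    store.extend(rows)

def consumer_view(store):
    """The data-consumer process applies its own logic at read time."""
    return [row for row in store if row["amount"] > 0]   # consumer-side rule

store = []
load(extract([{"id": 1, "amount": 10}, {"id": 2, "amount": -1}]), store)
print(consumer_view(store))   # [{'id': 1, 'amount': 10}]
```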
Translating the physical warehouse to ICT.

All kinds of data (technical) should get support for all types of information (logical) at all kinds of speed.
Speed, streaming, is bypassing (duplications allowed) the store - batch for the involved objects. Fast delivery (JIT, Just In Time).
💣 The figure is what is called the lambda architecture in data warehousing.
Lambda architecture (Wikipedia).
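A minimal sketch of the lambda idea referred to above: a precomputed batch view plus a speed (streaming) layer for events not yet in the batch view, merged at query time; names and numbers are illustrative.

```python
batch_view = {"sensor-1": 40}                       # slow path: precomputed
speed_layer = [("sensor-1", 2), ("sensor-2", 5)]    # fast path: recent events

def query(key: str) -> int:
    """Merge the batch view with the not-yet-batched recent events."""
    recent = sum(value for k, value in speed_layer if k == key)
    return batch_view.get(key, 0) + recent

print(query("sensor-1"))   # 42 = 40 from the batch view + 2 recent
```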
With physical warehouse logistics, this question for a different architecture is never heard of.
The warehouse is supposed to support the manufacturing process.
For some reason the data warehouse has been reserved for analytics and not for supporting the manufacturing process.
RH-1.6
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-1.6.1 Info
Maturity Levels 1-5
Why -still- discuss IT-business alignment?
4. In search of mythical silver bullet
5. Focusing on infrastructure/architecture
7. Can we move from a descriptive vehicle to a prescriptive vehicle?
(see link with figure 👓)
💣 This CMM-level discussion has been going on since 1990. Little progress in results has been made; that can be explained by the document analyses and the listed numbers.
Going about achieving the levels by ticking off some action list as done is a way to not achieve those goals. Cultural behaviour is very difficult to measure. Missing in IT is the C for communication: ICT.
RH-2 Details systems ZARF tactical 6x6 reference framework
RH-2.1
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-2.1.1 Info
RH-2.2
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-2.2.1 Info
Archiving, Retention policies.
Information is not only the active operational state but also the historical one: what has happened, who has executed, what was delivered, when was the delivery, when was the purchase, etc.
That kind of information is often very valuable, but at the same time it is not well clear how to organize it and who is responsible.
💣 Retention policies and archiving information are important to do well; the financial and legal advantages are not that obviously visible. Only when problems escalate to high levels does it become clear, but by then it is too late to solve.
When in some financial trouble, cost cutting is easily done here.
Historical and scientific purposes are moved out of any organisational process.
An archive is an accumulation of historical records in any media or the physical facility in which they are located.
Archives contain primary source documents that have accumulated over the course of an individual or organization's lifetime, and are kept to show the function of that person or organization.
Professional archivists and historians generally understand archives to be records that have been naturally and necessarily generated as a product of regular legal, commercial, administrative, or social activities.
The words "record" and "document" have a slightly different meaning in this context than technical ICT staff are used to.
In general, archives consist of records that have been selected for permanent or long-term preservation on grounds of their enduring cultural, historical, or evidentiary value.
Archival records are normally unpublished and almost always unique, unlike books or magazines of which many identical copies may exist.
This means that archives are quite distinct from libraries with regard to their functions and organization, although archival collections can often be found within library buildings.
Additional information container attributes.
😉 EDW 3.0: every information container must be fully identifiable. Minimally by:
- a logical context key
- moment of relevance
- moment received, available at the warehouse
- source of the received information container.
When there are compliance questions on information of this kind, it is often assumed to be an ICT problem only. Classic applications lack these kinds of attributes with the information.
💡 Additional information container attributes support implementing defined retention policies.
Every information container must have, for each applicable retention reference (see the sketch after this list):
- Normal operational visibility moments:
- registered in the system
- information validity start
- information validity end
- registration in system to end
- Legal change relevance:
- legal case registered in system, started
- registration for legal case in system to end
- Internal extended archive purposes:
- registration for archiving purposes in system to end
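The identification and retention attributes above can be read as one record type. A hedged sketch: attribute names follow the lists directly; the optional fields apply only when the retention reference is relevant.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ContainerRetention:
    # full identification (EDW 3.0 minimum)
    context_key: str
    moment_of_relevance: datetime
    moment_received: datetime            # available at the warehouse
    source: str
    # normal operational visibility moments
    registered_in_system: datetime
    validity_start: datetime
    validity_end: Optional[datetime] = None
    registration_end: Optional[datetime] = None
    # legal change relevance
    legal_case_started: Optional[datetime] = None
    legal_case_end: Optional[datetime] = None
    # internal extended archive purposes
    archive_end: Optional[datetime] = None
```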
Common issues when working with retention periods.
⚠ An isolated archive system is, in complexity, reliability and availability, a big hurdle, with high impact.
⚠ Relevant information for legal purposes that was moved out of the manufacturing process and is not available anymore in legal cases is problematic.
⚠ The impact of cleaning as soon as possible is high. The GDPR states it should be deleted as soon as possible.
This law is getting much attention and has regulators. Archiving information for longer periods is not directly covered by laws, only indirectly.
Government Information Retention.
Instead of a fight over how it should be solved, there is a fight over who else is to blame for missing information.

Different responsible parties have their own opinion on how conflicts in retention policies should get solved.
🤔 Having information deleted permanently, there is no way to recover when that decision turns out wrong.
🤔 The expectation that it would be cheaper and of better quality is a promise without warrants.
RH-2.3
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-2.3.1 Info
Administrative Value Stream Mapping Symbol Patterns.
Help in abstracting ideas comes not from long text but from using symbols and figures.
A blueprint is the old name for making a design before realisation.
- Value stream mapping has symbols to help in abstracting ideas.
- Structured programming, coding, has the well-known flow symbols.
- DEMO has a very detailed structure on interactions with symbols.
What is missing is something in between that helps in the value stream of administrative processing.
Input processing:
Retrieve multiple well-defined resources.
Transform into a data model around a subject.
The result is similar to a star model. The differences are that it lacks some integrity and constraint definitions.
Retrieve a data model around a subject.
Transform this into a denormalised one with possible logical adjustments.
Moving to in-memory processing for analytics & reporting, denormalisation is the way to achieve workable solutions (see the sketch after these patterns).
Retrieve multiple unstructured resources.
Transform (transpose) into multiple well-defined resources.
A well-defined resource is one that can be represented in rows and columns. The columns are identifiers for similar logical information in some context.
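A minimal sketch of the denormalisation pattern, with illustrative table and column names: dimension attributes of a small star-like model around a subject are pulled into every fact row, producing the flat rows that in-memory analytics wants.

```python
facts = [
    {"order_id": 1, "customer_id": 10, "amount": 25.0},
    {"order_id": 2, "customer_id": 11, "amount": 40.0},
]
customers = {
    10: {"name": "Ada", "region": "north"},
    11: {"name": "Bob", "region": "south"},
}

# Denormalise: copy the dimension attributes into every fact row.
flat = [{**fact, **customers[fact["customer_id"]]} for fact in facts]
print(flat[0])
# {'order_id': 1, 'customer_id': 10, 'amount': 25.0, 'name': 'Ada', 'region': 'north'}
```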
Execute Business Logic (score):
Retrieve a data model around a subject.
Execute business logic generating some result.
This type of processing is well known for RDBMS applications. The denormalisation is done by the application.
Retrieve denormalised data for a subject.
Execute business logic generating some result.
Moving to in-memory processing for analytics & reporting, denormalisation is the way to achieve workable solutions.
Retrieve historical results (business) of what has previously been scored. Execute business logic generating some result.
The monitoring block generates a log file (technical) and historical results (business), and halts the flow when something is wrong.
Logging / Monitoring:
-
Retrieve a data model around a subject. Apply business rules for assumed validity.
This logging block generates a log file. The period is limited; there is only technical capacity, with possible restarts, to show.
Does a line-halt of the flow when something is wrong.
-
Retrieve a result from an executed business logic process. Apply business rules for assumed validity.
This monitoring block generates a log file (technical) and historical results (business).
Does a line-halt of the flow when something is wrong (a sketch follows below).
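A hedged sketch of such a monitoring block, with assumed rule and field names: it validates a result with business rules, writes a log record, keeps the historical result, and line-halts the flow when a rule fails.

```python
import logging

logging.basicConfig(level=logging.INFO)
history = []                      # historical results (business)

class LineHalt(Exception):
    """Raised to stop the flow before more damage is done."""

def monitor(result: dict) -> dict:
    rules = {                     # assumed business rules, illustration only
        "score in range": 0.0 <= result["score"] <= 1.0,
        "subject present": bool(result.get("subject")),
    }
    history.append(result)        # archived before any halt
    for name, passed in rules.items():
        logging.info("rule %-16s: %s", name, "pass" if passed else "FAIL")
        if not passed:
            raise LineHalt(f"monitoring rule failed: {name}")
    return result

monitor({"subject": "case-7", "score": 0.83})   # passes: logged and archived
```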
Output, delivery:
-
From a well-defined resource, propagate to an external one (external to this processing context).
A logical switch is included with the goal of preventing sending out information when that is not applicable for some reason.
Administrative proposed standard pattern.
📚 The process is split up into four stages: the prepare request (IV, III) and the delivery (I, II). The warehouse is the starting point (inbound) and end point (outbound).
The request, with all necessary preparations and validations, goes through IV and III.
The delivery, with all necessary quality checks, goes through I and II.
SDLC life cycle steps - logging, monitoring.
Going back to the SDLC product life, ALC model type 3: this is a possible implementation of the manufacturing I, II phases.
💡 There are four lines of artefact collections at releases, which will become the different production versions:
- collecting input sources into a combined data model.
- modifying the combined data model into a new one suited for the application (model).
- running the application (model) on the adjusted, suited data, creating new information, results.
- delivering verified results to an agreed destination in an agreed format.

💡 There are two points that validate the state and create additional logging. This is new information.
- After having collected the input sources, technical and logical verification of what is there is done.
- Before delivering the results, technical and logical verification of what is there is done.
This is logic holding business rules. The goal is application logging and monitoring in a business perspective.
When something is badly wrong, halting the process flow is a safety mitigation preventing more damage.
There is no way to solve this with technical log files generated by tools like an RDBMS.
💡 The results are collected and archived (business dedicated). This is new information.
- After having created the result, but before delivering.
- It is useful for auditing purposes (what has happened) and for predictive modelling (ML). A pipeline sketch follows below.
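A sketch of the four manufacturing lines with the two validation points and the result archive described above; the stage bodies are placeholders, not the real transformations.

```python
def collect(sources):                       # line 1: collect input sources
    return [s.lower() for s in sources]

def remodel(data):                          # line 2: adjust the data model
    return sorted(data)

def run_model(data):                        # line 3: run the application (model)
    return {"count": len(data), "items": data}

def deliver(result):                        # line 4: deliver verified results
    print("delivered:", result)

def validate(obj, where):
    """Stand-in business rule; a real check would be far richer."""
    if not obj:
        raise RuntimeError(f"line-halt at {where}: nothing to process")
    return obj

archive = []                                           # business-dedicated archive
data = validate(collect(["B", "a"]), "after collect")  # validation point 1
result = run_model(remodel(data))
archive.append(result)                                 # archived before delivery
deliver(validate(result, "before delivery"))           # validation point 2
```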
RH-2.4
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-2.4.1 Info
RH-2.5
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-2.5.1 Info
RH-2.6
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-2.6.1 Info
RH-3 Details systems ZARF tactical 6x6 reference framework
RH-3.1 Data, gathering information on processes.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-3.1.1 Info
RH-3.2 Enterprise engineering, valuable processing flows.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-3.2.1 Info
RH-3.3 Information - data - avoiding process fluctuations.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-3.3.1 Info
RH-3.4 Edwh 3.0 - Data: collect - store - deliver.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-3.4.1 Info
RH-3.5 Patterns by changing context, changing technology.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-3.5.1 Info
RH-3.6 Change data - Transformations
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
⟲ RH-3.6.1 Info
© 2012, 2020, 2026 J.A. Karman