Design Data - Information flow
information, Enterprise core objects.
Data, gathering information on processes.
The data explosion. What has changed is the amount we are collecting, measuring processes as new information (edge).
📚 Information questions.
⚙ measurements data figures.
🎭 What to do with new data?
⚖ legally & ethically acceptable?
🔰 Most logical
back reference.
Contents
Reference | Topic | Squad |
Intro | Data, gathering information on processes. | 01.01 |
project-flow shop | The project shop: a lean approach to repeated actions. | 02.01 |
EDWH 3.0 logistics | Logistics of the EDWH - Data Lake - Data Mesh - EDWH 3.0 | 03.01 |
ramp up | Ramping up the administrative information line | 04.01 |
mismatch | Some mismatches in a value stream. | 05.01 |
What next | Change data - Transformations. | 06.00 |
| Combined pages as single topic. | 06.02 |
Combined pages as single topic.
👓 info types different types of data
🚧 Value Stream of the data as product
👓 transform information data inventory
👓 data silo - BI analytics, reporting
Progress
- 2012 week:44
- Moved the legal references list to the new inventory page.
- Added possible mismatches in the value stream with a BiSL reference (demand - supply).
- 2019 week:48
- Page converted, added with all lean and value stream ideas.
- Aside from the value stream and EDWH 3.0 approach, links are added to the building block patterns SDLC and Meta.
- The technical improvements available externally on the market are the options for internal improvements.
The project shop: a lean approach to repeated actions.
The project shop is associated with the idea that applying lean thinking is not possible. Is that true, or are there situations where new technology implements a lean way of working?
 
project shop, moving the unmovable.
It is using a great invention of process improvement over and over again.
That is: the dock. Building in the water is not possible. Building ashore raises the question of how to get it into the water safely.
🔰 Reinvention of patterns.
Moving something that is unmovable.
Changing something that has always been done that way.
Minimizing the time for a road adjustment by placing a prefabricated tunnel. Building takes several months, but once it is able to be moved, placing it is done in just 3 days.
See the time-lapse. 👓 Placing the tunnel was a success; a pity the intended road isn't done after three years.
 
The project approach of moving the unmovable has been copied many times with the intended usage afterwards.
rail bridge deck cover
The approach is repeatable.
💡 Reinvention of patterns. Moving something that is unmovable.
🎭 When a project shop is better in place, why not copy this at ICT?
Administration information flow.
Seeing this way of working, the association is with administrative work moving the papers around.
Flow lines are often the best and most organized approach to establish a value stream.
The "easiest" one is an unstructured approach. The processes are still arranged in sequence; however, there is no fixed signal when to start processing a part.
💡 Reinvention of patterns. Using the information flow as assembly line.
🎭 When a flow line is a fit for an administrative process, why not copy this at ICT?
🎭 When an administrative process is associated with administrative tags (e.g. product description) being processed, why not have them related to each other?
Administrative Value Stream Mapping Symbol Patterns.
Help in abstracting ideas is not by long text but using symbols and figures. A blueprint is the old name for doing a design before realisation.
- Value stream mapping has symbols to help in abstracting ideas.
- Structured Program, coding, has the well known flow symbols.
- DEMO has a very detailed structure on interactions with symbols.
What is missing is something in between that is helping in the value stream of administrative processing.
 
Input processing:
- Retrieve multiple well defined resources.
Transform into a data model around a subject.
The result is similar to a star model. The differences are that some integrity and constraint definitions are lacking.
- Retrieve a data model around a subject.
Transform this in a denormalised one with possible logical adjustments.
Moving to in memory processing for analytics & reporting, denormalisation is the way to achieve workable solutions.
- Retrieve multiple unstructured resources.
Transform (transpose) into multiple well-defined resources.
A well-defined resource is one that can be represented in rows and columns. The columns are identifiers for similar logical information in some context.
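The first input-processing pattern can be sketched in a few lines of Python. This is a minimal sketch with all names assumed (they do not come from the text): well-defined resources are plain rows and columns, and the subject model is built by joining fact rows with dimension lookups, deliberately without enforcing the integrity and constraint definitions the text notes as lacking.

```python
def to_subject_model(facts, dimensions):
    """Join fact rows with dimension lookups around a subject.

    facts:      list of dicts, each holding keys and measures.
    dimensions: mapping of column name -> {key: attribute-dict}.
    Integrity is NOT enforced: an unknown dimension key simply
    yields no extra attributes, mirroring the missing constraints.
    """
    model = []
    for row in facts:
        combined = dict(row)
        for col, lookup in dimensions.items():
            combined.update(lookup.get(row.get(col), {}))
        model.append(combined)
    return model

# Hypothetical resources: orders (facts) plus a customer dimension.
orders = [{"order_id": 1, "cust": "C1", "amount": 120},
          {"order_id": 2, "cust": "C9", "amount": 80}]   # C9 is unknown
customers = {"cust": {"C1": {"cust_name": "Acme", "region": "EU"}}}

print(to_subject_model(orders, customers))
```

The second order keeps only its own columns, showing how the star-like result tolerates a missing dimension key instead of rejecting the row.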
Execute Business Logic (score):
- Retrieve a data model around a subject.
Execute business logic generating some result.
This type of processing is well known for RDBMS applications. The denormalisation is done by the application.
- Retrieve denormalised data for subject.
Execute business logic generating some result.
Moving to in memory processing for analytics & reporting, denormalisation is the way to achieve workable solutions.
- Retrieve historical results (business) what has been previous scored. Execute business logic generating some result.
This monitoring block generates a log-file (technical) and historical results (business), and halts the flow when something is wrong.
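The scoring pattern above, executing business logic on retrieved data to generate a result, can be sketched as follows. All names are assumptions for illustration; the rules here are hypothetical examples, not rules from the text.

```python
def score(record, rules):
    """Sum the points of every business rule that matches the record.

    record: a denormalised dict for one subject.
    rules:  list of (predicate, points) pairs, the business logic.
    """
    return sum(points for matches, points in rules if matches(record))

# Hypothetical business rules on an order record.
rules = [
    (lambda r: r["amount"] > 100, 10),   # large order
    (lambda r: r["region"] == "EU", 5),  # home region
]
print(score({"amount": 120, "region": "EU"}, rules))  # 15
```

Whether the record comes from an RDBMS data model or from a denormalised in-memory structure, the scoring step itself stays the same; only the retrieval differs.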
Logging: / Monitoring:
- Retrieve a data model around a subject. Apply business rules for assumed validity.
This logging block generates a log-file. The retention period is limited; only technical capacity with possible restarts is shown.
It does a line-halt of the flow when something is wrong.
- Retrieve a result from an executed business logic process. Apply business rules for assumed validity.
This monitoring block generates a log-file (technical), historical results (business).
Does a line-halt of the flow when something is wrong.
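A monitoring block of this kind can be sketched in Python. This is a minimal sketch under assumed names: business rules are validated, a technical log entry is written on failure, the result is archived as business history on success, and a failed check raises an exception, the administrative equivalent of a line stop.

```python
import logging

class LineHalt(Exception):
    """Raised to halt the flow when a business rule fails."""

def monitor(result, checks, history):
    """Validate a result against business rules.

    On failure: write a technical log entry and halt the line.
    On success: archive the result as business history.
    """
    for name, rule in checks.items():
        if not rule(result):
            logging.error("monitor: check %r failed on %r", name, result)
            raise LineHalt(name)
    history.append(result)   # historical results (business archive)
    return result
```

The caller decides what a halt means: stop the whole flow, retry, or alert an operator; the block only guarantees that a bad result never propagates silently.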
Output, delivery:
- From a well-defined resource, propagate to one that is external to this processing context.
A logical switch is included with the goal of preventing sending out information when that is not applicable for some reason.
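The delivery step with its logical switch can be sketched like this. All names are assumptions; the classification-based switch is only one possible reason for holding information back.

```python
def deliver(resource, may_send, send):
    """Propagate a well-defined resource to an external context,
    guarded by a logical switch that holds it back when sending
    is not applicable for some reason."""
    if not may_send(resource):
        return False        # held back, nothing leaves this context
    send(resource)
    return True

# Hypothetical switch: only send resources classified as public.
outbox = []
is_public = lambda r: r["classification"] == "public"
deliver({"id": 1, "classification": "public"}, is_public, outbox.append)
deliver({"id": 2, "classification": "restricted"}, is_public, outbox.append)
print(outbox)
```

The return value tells the flow whether delivery happened, so a held-back resource can still be logged or escalated instead of vanishing.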
Logistics of the EDWH - Data Lake - Data Mesh - EDWH 3.0
Processing objects, processing information goes along with responsibilities. There is an origin of the information and a consumer of combined information lines.
 
Classic DataWareHousing.
⚠ A data warehouse is at the moment siloed to reporting tasks. Reporting in dashboards and reports so managers are making up their mind with those reports as the "data".
Other usage of a data warehouse is seen as problematic: when it is used for operational information questions, AI, or better machine learning, may be involved, bypassing those managers as the decision makers.
👓
❓ The technology question of what kind of DBMS should be used in a monolithic system for management reporting is a strategy question.
❓ Data curation before being used in a monolithic system for management reporting is a strategy question.
❓ Historical information in this monolithic system for management reporting is a question.
❓ Connecting to analytical usage in an operational flow in this monolithic system for management reporting is a question.
DataWareHousing, Information flow based.
Repositioning the data warehouse as part of an operational flow makes more sense. A compliance gap gets a solution:
✅ Who has access to what kind of data is managed: authorized by the data owner, with registered data consumers, monitored and controlled.
In the figure:
- The two vertical lines manage who has access to what kind of data: authorized by the data owner, registered data consumers, monitored and controlled.
- The confidentiality and integrity steps are not bypassed with JIT (lambda).
In a figure:
The following consumers are also valid for the warehouse:
- Archive
- Operations
- ML operations
A very different approach to building up this enterprise information data warehouse. Axioms:
💡 No generic data model for relations between information elements - information containers.
💡 Every information container must be fully identifiable. Minimal by:
- a logical context key
- moment of relevance
- moment received, available at the ware house
- the source of the received information container.
💡 Every information container must have clear ownership:
- The owner is accountable for budget.
- The owner is responsible for compliant use of information.
For being fully identifiable a well designed stable naming convention is required.
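The four identification points and a stable naming convention can be captured in a small value type. This is a minimal sketch; the field names and the naming format are assumptions, not a convention from the text.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ContainerId:
    """Minimal full identification of an information container,
    following the four points above (names are assumed)."""
    context_key: str        # logical context key
    relevant_at: datetime   # moment of relevance
    received_at: datetime   # moment received, available at the warehouse
    source: str             # source of the received information container

    def name(self) -> str:
        # One possible stable naming convention for full identifiability.
        return f"{self.source}.{self.context_key}.{self.relevant_at:%Y%m%d}"

cid = ContainerId("customer42", datetime(2020, 1, 2),
                  datetime(2020, 1, 3), "crm")
print(cid.name())  # crm.customer42.20200102
```

Making the type frozen (immutable) reflects that an identification, once assigned in the warehouse, should never change.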
Selfservice - Managed
Self service sounds very friendly; it is a euphemism for no service. Collecting your data and processing your data, yourself.
The advantage for the customer is picking what is felt convenient found on some shelf. The disadvantages are:
- No validation or check on whether what feels convenient is actually applicable.
- Limited to material that is not harmful when used incorrectly (classifications: public, restricted).
- It can become very messy when the inventory is not checked regularly.
Managed: have it prepared and transported for you so it can be processed for you.
The advantage is a well-controlled environment that is also capable of handling more sensitive material (classifications: confidential, secret).
Building blocks in other design lines.
Rebuilding the process flow requires:
👓 DTAP Multiple dimensions processes by layers
👓 ALC type 3 low code ML process development
👓 data administration describing modelling data
👓 Security - modelling access information
Ramping up the administrative information line
An administrative process is similar to the physical material flow.
There is a reason to process something (trigger). That reason is the value chain.
 
Administrative process, Information flow based.
Any request starts (pull kanban).
The request, with all necessary preparations and validations, goes through IV and III.
How to Ramp Up a Kanban System - Part 1: Preparation
Designing a kanban system on paper is much easier than implementing it on the shop floor.
When ready, the delivery with its own processes can proceed (frontend).
The delivery, with all necessary quality checks, goes through I and II.
Overall, this debugging process will also help you with the "check" and "act" of the PDCA sequence.
If you do this debugging, you will learn if the system actually works and if it is (hopefully) better than what you had before.
Don´t take it for granted that just because you changed something, it must be better than before!
Administrative process, differences to physical objects.
- ⚠ Administrative information is easily duplicated.
Using ICT, duplication is a standard action. Making all those copies gives some feeling of independence.
The overall effect is likely losing the connection to the value chain. Technical ICT hypes are a signal of this problem.
- ⚠ Administrative information is often not complete in the needed material supply.
When assembling a physical product the needed material planning is clear. Administrative information usually requires additional input resources.
Those additional resources are often external chains to connect. Problems arise when those input resources are not valid, or change when not expected to change.
Frustrations of this kind are common.
Administrative proposed standard pattern.
📚 The process split up in four stages of prepare request (IV, III) and the
delivery (I, II). The warehouse as starting point (inbound) and end point (outbound).
The request, with all necessary preparations and validations, goes through IV and III.
The delivery, with all necessary quality checks, goes through I and II.
SDLC life cycle steps - logging, monitoring.
Going back to the SDLC product life cycle, ALC model type 3. This is a possible implementation of the manufacturing phases I and II.
💡 There are four lines of artefact collections at releases, which will become the different production versions.
- collecting input sources into a combined data model.
- modifying the combined data model into a new one suited for the application (model).
- running the application (model) on the adjusted suited data creating new information, results.
- Delivering verified results to an agreed destination in an agreed format.
💡 There are two points that validate the state and create additional logging. This is new information.
- After having collected the input sources, technical and logical verification of what is there is done.
- Before delivering the results, technical and logical verification of what is there is done.
This is logic having business rules. The goal is application logging and monitoring in business perspective.
When something is badly wrong, then halting the process flow is a safety mitigation preventing more damage.
There is no way to solve this by technical logfiles generated by tools like a RDBMS.
💡 The results are collected and archived (business dedicated). This is new information.
- After having created the result, but before delivering.
- It is useful for auditing purposes (what has happened) and for predictive modelling (ML).
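The four artefact lines with their two validation points can be sketched as one flow. This is a minimal sketch with every name and rule assumed: collect inputs, validate, adapt, run the model, validate again, then deliver, halting the line whenever a validation point fails.

```python
class LineStop(Exception):
    """Halt the flow when a validation point fails (safety mitigation)."""

def check(data, rule, log):
    """Validation point: business-rule logging, halting on failure."""
    if not rule(data):
        log.append("halt")
        raise LineStop()
    log.append("ok")

def run_line(sources, log):
    """Sketch of the four artefact lines (all names assumed):
    collect -> adapt -> run model -> deliver, with validation
    points after collecting and before delivering."""
    combined = {k: v for src in sources for k, v in src.items()}  # collect
    check(combined, lambda d: "subject" in d, log)                # point 1
    adjusted = {**combined, "ready": True}                        # adapt
    results = {"score": len(adjusted)}                            # run model
    check(results, lambda r: r["score"] > 0, log)                 # point 2
    return results                                                # deliver

log = []
print(run_line([{"subject": "order"}, {"amount": 9}], log), log)
```

The `log` list stands in for the business-perspective logging the text asks for; a real implementation would write durable records rather than an in-memory list.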
Some mismatches in a value stream.
Aside all direct questions from the organisation many external requirements are coming in.
A limited list to get an idea of regulations having impact on the administrative information processing.
 
business flow & value stream.
Having a main value stream from left to right, the focus can be top down with the duality of processes - transformations and the product - information.
Complicating factor is that:
✅ Before external information can be retrieved, there must be some level of agreement on what to retrieve.
✅ Before the delivery can be fulfilled, the request on what to deliver must be there.
Having the same organisation, the focus can be bottom up with the layers in silos and separation of concerns.
Complicating factor is that:
❓ In the centre, needed government information is not coming in by default. The request for that information is not reaching the operational floor.
😲 The silos responsible for parts of the operating process are not, by default, exchanging needed information in the easiest way.
BISL Business Information Services Library.
BiSL is used for a demand-supply chain, often going along with internal business and externally outsourced IT services. Nice to see is a separation of concerns in a similar way, placing the high-level drivers in the centre.
The framework describes a standard for processes within business information management at the strategy, management and operations level.
BiSL is closely related to the ITIL and ASL frameworks, yet the main difference between these frameworks is that ITIL and ASL focus on the supply side of information (the purpose of an IT organisation), whereas BiSL focuses on the demand side (arising from the end-user organisation).
The demand side focus for some supply is a solution for the supposed mismatch business & ICT. The approach for that mismatch is an external supplier.
Indeed there are gaps. The question should be: is there a mismatch, or have the wrong questions been asked?
In the value stream flow there are gaps between:
- Operational processes, in the chain of the product transformation - delivery.
- Delivering strategic management information, assuming the silos in the transformation chain - delivery are cooperating.
- Extracting and creating management information within the silos, between their internal layers.
Different responsible parties have their own opinion on how those conflicts should be solved.
The easy way is outsourcing the problem to an external party, a new viewpoint coming in.
🤔 The expectation that this would be cheaper and of better quality is a promise without warranty.
🤔 With no alignment between the silos, there is a question about the version of the truth.
When these issues are the real questions real problems to solve:
- Solve the alignment at the operational processes with the value stream of the product. Both parties need to agree on a single version of the truth.
- Solve the alignment in extracting and creating management information within the silos, between their internal layers. There are two lines of separation in context.
- Use the management information within the silos as consolidated information when delivering strategic management information.
Change data - Transformations
Seeing the value stream within an administrative product is a different starting point for completely new approaches.
The starting point is redesigning what is not working well. Not automatically keeping on doing things as they have always been done; also not changing things merely for the sake of change.
 
Design thinking.
It is a common misconception that design thinking is new. Design has been practiced for ages: monuments, bridges, automobiles, subway systems are all end-products of design processes.
Throughout history, good designers have applied a human-centric creative process to build meaningful and effective solutions.
The design thinking ideology is following several steps.
Definition: The design thinking ideology asserts that a hands-on, user-centric approach to problem solving can lead to innovation, and innovation can lead to differentiation and a competitive advantage.
This hands-on, user-centric approach is defined by the design thinking process and comprises 6 distinct phases, as defined and illustrated below.
See link at figure 👓.
 
Those six phases are in line with what the CRISP-DM model states. What is missing, when comparing this with the PDCA cycle, is the Check: verifying that it works as expected after implementation.
Combining information connections between silos & layers.
💡 Solving gaps between silos in the organisation is supporting the values stream.
Having information aligned by the involved parties avoids different versions of the truth.
It is easier to consolidate that kind of information to a centrally managed (BI analytics) tactical - strategic level.
The change to achieve this is one of cultural attitudes. That is a top down strategical influence.
Maturity Level 1-5
Why -still- discuss IT-business alignment?
4. In search of mythical silver bullet
5. Focusing on infrastructure/architecture
7 Can we move from a descriptive vehicle to a prescriptive vehicle?
(see link with figure 👓)
💣 This CMM levelling has been going on since 1990. Little progress in results has been made; this can be explained by the document analyses and the listed numbers.
Going for the levels by ticking off some action list is a way to not achieve those goals. Cultural behaviour is very difficult to measure. Missing in IT is the C for communication: ICT.
The Philosophy and Practicality of Jidoka
Diving deep into the Toyota philosophy, you could see this as JIT telling you to let the material flow, and jidoka telling you when to stop the flow.
This is a bit like the Chinese philosophical concept of Yin and Yang, where seemingly opposite or contrary forces may actually be complementary.
The same applies here. JIT encourages flow, and Jidoka encourages stops, which seems contrary. However, both help to produce more and better parts at a lower cost.
Unfortunately, JIT gets much, much more attention as it is the glamorous and positive side, whereas jidoka is often seen as all about problems and stops and other negative aspects.
Yet, both are necessary for a good production system.
💣 Ignoring the holistic view of the higher goal and focusing only on a detailed aspect like JIT can make things worse, not better.
Combined pages as single topic.
👓 info types different types of data
✅ Value Stream of the data as product
👓 transform information data inventory
👓 data silo - BI analytics, reporting
🔰 Most logical
back reference
© 2012,2020 J.A.Karman