logo jabes

Align Technology service


🎭 Summary & Indices Elucidation 👁 Foreword Vitae 🎭

👐 C-Steer C-Serve C-Shape 👁 I-C6isr I-Jabes I-Know👐
👐 r-steer r-serve r-shape 👁 r-c6isr r-jabes r-know👐

🔰 Contents E_SDLC  C_SAFE  E_Inform C_Mdata CMM0-4IT 🔰
  
🚧  👁-GAP F-SDLC F_SAFE F_Inform F_Mdata CMM3-4IT 🚧
  
🎯 ✅-GAP CI-SDLC CI-SAFE CI_Inform CI_Mdata CMM5-4IT 🎯


R-1 Basics & Infrastructure ICT service: Gaps


R-1.1 Contents

R-1.1.1 Global content
vmap: sdlc design, sdlc devops, bianl devops, bpm design, bpm design, bianl
The organisation powered by ICT is like a ship: the engines (the data centre) are below deck, out of sight. It serves multiple customers (multi-tenancy) for the best performance and the best profit on all layers.

There are six pillars, in a functional and a technical layer. Within the three internal pillars, linked access is possible through an imagemap over the given figure.

When you want to step logically backward:
🔰 Too fast .. previous.
R-1.1.2 Guide reading this page
Technology vs Organisation - Gap
This page is about Technology. Technology is the enabler, in a service-providing role, for the missions of the organisation. When a holistic approach to organisational missions and organisational improvements is wanted, the service by this technology pillar should be at an adequate level.
💣 The first assumption, with all the fast technology evolvements, is that there are no technology impediments. That may be true for the technology itself, but having found all those gaps it is terribly wrong for the service by technology.
What to do about this?
  1. Acknowledge the gaps, issues in technology service.
    This is easily seen as blaming and negativity. It hurts the safe existing situation of what has always been done by people.
  2. Getting a shared mindset on the need for improvement.
    Letting in ideas for improvements from within the enterprise.
  3. Actually making the changes that serve the improvement goals.
🤔 The priorities are set by the enterprise and its organisational missions, not by technology.

The project triangle at the Software Development Life Cycle
Rte devil triangle Wanting artefacts deployed, ⬅
💣 forgetting the goal at production: the artefacts' quality and their impact.
Wanting deployed into production, ⬅
💣 forgetting to have selected verified well functional artefacts.
Wanting at production artefacts, ⬅
💣 forgetting deployment, lifecycle.

Technology to Organisation - Service
Alignments for:

The Technology, Identities triangle for Business applications
Security at rest devil triangle Identities defined by technology, ⬅
💣 forgetting the functionality goal in business applications.
Defining identities at applications, ⬅
💣 failing to have generic technology supporting automated definitions.
Getting technology for applications, ⬅
💣 forgetting how identities get their functionality in business applications.

Optimisation processes
😉 Required are closed loops. Working toward an optimised business and technology situation, there is a gap in knowledge and tools. The proposal to solve those gaps is "Jabes". The idea of a virtual shop-floor is "Jabsa".

R-1.1.3 Local content
Reference Squad Abbreviation
R-1 Basics & Infrastructure ICT service: Gaps
R-1.1 Contents contents Contents
R-1.1.1 Global content
R-1.1.2 Guide reading this page
R-1.1.3 Local content
R-1.1.4 Progress
R-1.2 Lean Agile: SDLC challenge techgap_02 E_SDLC
R-1.2.1 The big elephant at SDLC misunderstandings
R-1.2.2 Legal SDLC obligations
R-1.2.3 Releases, SDLC, the technology mindset
R-1.3 Lean Agile: Safety perspectives techgap_03 C_SAFE
R-1.3.1 The safety challenge, cyber administrative
R-1.3.2 Legal Safety obligations
R-1.3.3 Safety, cyber, the technology mindset
R-1.4 Information processing functionality techgap_04 E_Inform
R-1.4.1 Information, what is it about?
R-1.4.2 Operational plane - value streams
R-1.4.3 Analytical Plane - improving flow
R-1.4.4 Research using information
R-1.5 Master Data: Communication Cooperation techgap_05 C_Mdata
R-1.5.1 Changing the wrong thing: "developer problem"
R-1.5.2 Focus on the wrong thing: "developer interests"
R-1.5.3 Missed: Why Systems engineering
R-1.6 Maturity 0: ICT service impact NOT understood techgap_06 CMM0-4IT
R-1.6.1 Historical lockin release management
R-1.6.2 Historical lockin safety management
R-1.6.3 Historical lockin information flows
R-1.6.4 Historical lockin master data
R-2 ICT service gaps Understanding: getting them solved
R-2.1 Seeing ICT Service Gap types techsol_01 👁-GAP
R-2.1.1 Deducing the reason for ICT service gaps
R-2.1.2 The Administrative Cyber System Life Cycle
R-2.1.3 Safety at Administrative Cyber Systems
R-2.1.4 Administrative Cyber Systems, the Operational Plane
R-2.1.5 Master Data, Communication: Administrative Cyber Systems
R-2.2 Solving: The ICT-SDLC challenge techsol_02 F-SDLC
R-2.2.1 System Life Cycles at multiple layers
R-2.2.2 LCM Basics for Business Applications
R-2.2.3 Advanced LCM topics for Business Applications
R-2.3 Solving: Safety perspectives techsol_03 F_SAFE
R-2.3.1 Safety perspectives within the organisation
R-2.3.2 Safety perspectives combined to release management
R-2.3.3 Safety perspectives privileged accounts
R-2.4 Solving: Historical Information, risk & impacts techsol_04 F_Inform
R-2.4.1 ocesss
R-2.4.2 nology pillar
R-2.4.3 n process
R-2.5 Solving: Communication Cooperation techsol_05 F_Mdata
R-2.5.1 s Security
R-2.5.2 ring & Analysing
R-2.5.3
R-2.6 Maturity 3: ICT service solutions for gaps techsol_06 CMM3-4IT
R-2.6.1 Leaving the comfort zones
R-2.6.2 Detailed information for service solutions
R-2.6.3 The silver bullet
R-2.6.4 Getting ITC in control
R-3 ICT service adding value to missions of organisations
R-3.1 Avoiding ICT Service Gap types techval_01 ✅-GAP
R-3.1.1 Knowing understanding by ontology
R-3.1.2 Start with what drives success
R-3.1.3 Looking for a chart representing the enterprise area
R-3.1.4 Jabes Vision: Change ICT into a service culture ITC
R-3.2 Continuous improvement of Systems techval_02 CI-SDLC
R-3.2.1 Knowing, understanding by measuring
R-3.2.2 Start with understanding the requirements
R-3.2.3 Looking at, understanding quality management
R-3.2.4 Jabes vision: Servicing the Chief Product Officer
R-3.3 Continuous improvement of Safety techval_03 CI-SAFE
R-3.3.1 Safety some principles for realisations
R-3.3.2 Safety several basic attention areas
R-3.3.3 Safety standards, body of knowledge
R-3.3.4 Jabes vision: Service to Chief Safety Officer
R-3.4 Information processing adding value techval_04 CI-Inform
R-3.4.1 Focus: value stream at the shopfloor
R-3.4.2 An extensive closed loop framework (I)
R-3.4.3 An extensive closed loop framework (II)
R-3.4.4 Retrospective "Closed Loops", Genba-2
R-3.5 Adapting Cooperative Communication techval_05 Jabes-using
R-3.5.1 Genba 1,2,3 using the virtual shopfloor
R-3.5.2 An extensive Data literacy framework (I)
R-3.5.3 An extensive Data literacy framework (II)
R-3.5.4 Retrospective lean: Data literacy, Genba-1
R-3.6 Maturity 5: ICT solutions adding value techval_06 CMM5-4IT
R-3.6.1 Mindset prerequisites
R-3.6.2 An extensive DevOps framework (I)
R-3.6.3 An extensive DevOps framework (II)
R-3.6.4 Retrospective lean: DevOps, Genba-3
R-3.6.4 Following steps

R-1.1.4 Progress
Done and currently working on:

Planning to do:

man_elephant.jpg

R-1.2 Lean Agile: SDLC challenge

Any development life cycle, also that of software, has assumptions. A well-known staging standard:
  1. Develop
  2. Test
  3. Acceptance
  4. Production
This DTAP staging has many variants in wording but the principles are the same. The assumption is that everyone easily understands what to do, how to do it, and why it should be done.
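As an illustration only, a minimal sketch in Python of what the DTAP assumption implies, using hypothetical stage names and a hypothetical gate flag (not a prescribed implementation): an artefact version moves exactly one stage forward, and only after the gate of its current stage has passed.

  # Hypothetical sketch of DTAP promotion: an artefact only moves one stage
  # forward, and only after the quality gate of its current stage has passed.
  STAGES = ["develop", "test", "acceptance", "production"]

  def promote(artefact: dict, gate_passed: bool) -> dict:
      """Return a copy of the artefact advanced by one stage, or raise."""
      idx = STAGES.index(artefact["stage"])
      if idx == len(STAGES) - 1:
          raise ValueError("already in production")
      if not gate_passed:
          raise ValueError(f"gate for stage '{artefact['stage']}' not passed")
      return {**artefact, "stage": STAGES[idx + 1]}

  release = {"name": "billing-app", "version": "2.4.1", "stage": "test"}
  release = promote(release, gate_passed=True)
  print(release["stage"])   # -> acceptance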
R-1.2.1 The big elephant at SDLC misunderstandings
How to do SDLC
There are many issues. Root causes are misunderstandings and wrong perceptions of:
🤔 Business intelligence and analytics are not associated with release management; just think of trying to manage all that content in Excel spreadsheets.
Business applications are the usual context everybody associates with SDLC. These are usually monolithic and siloed, with a missing design of business components and of the infrastructure used. There are two types of business components:
  1. Business information, data. The assets that are input and, after processing, the result.
  2. Business rules, code. How transformations on information are expected to work in processing, for the good, the bad and the ugly.
➡ 💣 When this is not seen in the solution, it is not about a business application.
Infrastructure is the historical association for information technology.
A separation of concerns:
  1. Platforms, tools, middleware, DBMS systems, ERP systems, file transfers, messaging, web traffic. These have their own approach to release management because of peculiar dependencies. One of those dependencies is continuity for business applications.
  2. Operating systems, the software component enabling hardware systems. The hardware is more and more virtualised by software.
  3. Network, adapters, line connections, routers, firewalls, segmentations for safety isolations.
Although this seems a logical separation of concerns, there are still a lot of misunderstandings and heated discussions. A platform or DBMS is, from the perspective of the operating system, an "application"; for a "business application" it is a required infrastructure component.
➡ 💣 When this kind of discussion is seen, the expectation of achieving a compliant environment is mission impossible.
Divide and conquer
😉 Divide a bigger issue into smaller ones to have each of them better understood. (Edsger W. Dijkstra EWD709)
Because we struggle with the small sizes of our heads as long as we exist, we need intellectual techniques that help us in mastering the complexity we are faced with: When faced with an existing design, you can apply them as a checklist; when designing yourself, they provide you with strong heuristic guidance. In my experience they make the goal "intellectually manageable" sufficiently precise to be actually helpful, in a degree that ranges from "very" to "extremely so".

Machine learning git
Example missing the SDLC goal and artefacts
😲 The idea of putting a data mining, AI, machine learning (ML) project into git for versioning will cause big issues.
To be aware of: the operational data, training and validation, has the role of source code. There is a goal of using some method to fit the line (the model).
An ML project uses real operational information. The size easily grows to many GBs; 60 GB and more is small.
SDLC tool assumptions: these assumptions are usually a mismatch in such projects.
Bitbucket from Atlassian has a repository limit of 1 GB for the total of all historical versions.
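A minimal sketch of the usual workaround, assuming a hypothetical data/train.csv file and pointer layout: keep the large training data outside the repository and commit only a small pointer file with its checksum, so the data version stays traceable without pushing gigabytes into git (tools such as DVC or git-lfs industrialise this idea).

  # Hypothetical sketch: track large ML data by a small checksum pointer file;
  # only the pointer is committed, the data itself stays out of the repository.
  import hashlib, json, pathlib

  def sha256_of(path: str) -> str:
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MB chunks for large files
              h.update(chunk)
      return h.hexdigest()

  def write_pointer(data_file: str) -> None:
      pointer = {"path": data_file,
                 "sha256": sha256_of(data_file),
                 "bytes": pathlib.Path(data_file).stat().st_size}
      pathlib.Path(data_file + ".pointer").write_text(json.dumps(pointer, indent=2))

  # write_pointer("data/train.csv")
  # git add data/train.csv.pointer     (data/train.csv itself stays in .gitignore)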

R-1.2.2 Legal SDLC obligations
Regulatory Guidelines
😲 These are well documented by several sources at a high and medium abstraction level. However, there is no real pressure on organisations to get this compliant and well done. A change may come with new regulations requiring to follow them, NIS2 and DORA (EU).
❗ What is not there: detailed technical implementation instructions with checklists.
❗ What is not in place: regulatory audits with corrective controls.
There is an intangible safety connection, see: Legal Safety obligations
Guidelines iec/iso 27002 (2013), Release management
The environment should be well secured. Safety measures against accidental (mistakes) or intended (hack) destruction. Those guidelines, mentioned in regulations, are clear.
The intention is to know which software version of each type was used at any moment in time in the production environment. Versioning in development is not mentioned.
Protection, safety.
istqb
Versions used in production at "12.5.1 Installation of software on operational systems":
istqb
Information Quality, impact.
iso 27002 12-5-1b
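A minimal sketch, with a hypothetical file name and record layout, of the intention behind 12.5.1: an append-only installation log that can answer which version of which software was running in production at a given moment.

  # Hypothetical append-only installation log: which version ran in production when.
  import csv, datetime

  LOG = "installations.csv"   # assumed columns: timestamp, software, version

  def record_installation(software: str, version: str) -> None:
      with open(LOG, "a", newline="") as f:
          csv.writer(f).writerow([datetime.datetime.now().isoformat(), software, version])

  def version_at(software: str, moment: str) -> str | None:
      """Latest version of `software` installed at or before ISO timestamp `moment`."""
      with open(LOG, newline="") as f:
          hits = [row for row in csv.reader(f) if row[1] == software and row[0] <= moment]
      return hits[-1][2] if hits else None

  # record_installation("billing-app", "2.4.1")
  # print(version_at("billing-app", "2025-01-01T00:00:00"))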

R-1.2.3 Releases, SDLC, the technology mindset
classic modelv2 lines
Classic centralised life cycle model DTAP
The classic approach is having clear stages for releases with their versions and archiving.
The methodology assumes there is a shared development environment the developers are cooperatively working in.
The software library is the central location having all possible future, current, and previous versions of production software.
logo git
The benevolent dictator
In the git reference, the assumption is that the developer is also the operator of his own environment. Knowledge of low-level shell commands is expected.
benevolent-dictator.png Using any release-management tool always requires an ultimate approver; in git terminology: the dictator.
All artifacts are verified on incremental changes when they are merged; there will be blocking issues when something gets out of order.
💣 What is missing: a simple connection to well-defined functional acceptance validation.
The distributed life cycle model idea
sdlc_gitwrk_01.png Git uses a local repository in personalised locations.
The local files are the daily working activities.
Central managed repository: synchronised to local.
Git was made in 2005, when 64 kb modems were still the norm.
💣 The local methodology assumes that a shared development environment for cooperative working is not possible.

Git life cycle model DTAP
nvie git lines Nvie is often referred to as a successful approach to using git. The picture shows Git flow from nvie (2010). In principle it has the same structure as the classic centralised model.
The focus is on developing software (3GL).
What is not there: enabling AI, ML (machine learning) tools in information flows, which goes by using multiple stations.
To segregate dedicated stations:
Classic life cycle integrating with recent tools
Example: complete service doing release management.
Following what is asked of technology, there once was a blog on integrating technologies. CA, Computer Associates, was the owning company. Notice the release-management flow by the lines of the master, parallel development, and an emergency fix bringing a release version into production.
It is the developer laptop that is positioned as the starting point. Another tool, Jenkins, is brought in for tailored scripted (development) packaging.
ca endevor git

💣 The focus on the goal to achieve for the organisation, release management, is missing.
It is about tools and personal developer preferences. Some Endevor documentation is still available: CA Endevor SCM and Git integration
Ancient automation

R-1.3 Lean Agile: Safety perspectives

What is wrong with the safety of platforms, the safety of applications, the safety of sensitive information?
What should be: (soll)
  1. Organisation: Safety Accountability
  2. Chief Product Owner (CPO) role ➡ safety
  3. Technology safety insight support as service
💣 However there are many issues. Root causes are misunderstandings and wrong perceptions. The usual approach for safety is a focus on technology. Needed is a mindset for organisational risk & impact.

R-1.3.1 The safety challenge, cyber administrative
How to review and implement Safety
There are many issues. Root causes are misunderstandings and wrong perceptions of:
🤔 Business intelligence and analytics are not associated with safety; just think of trying to manage all that content in Excel spreadsheets.
Business applications are the usual context everybody associates with safety, cyber security. These are usually monolithic and siloed, with a missing design of business components and of the infrastructure used. There are two types of business components:
  1. Business information, data. The assets that are input and, after processing, the result.
  2. Business rules, code. How transformations on information are expected to work in processing, for the good, the bad and the ugly.
➡ 💣 When this is not seen in the solution, it is not about a business application.
Infrastructure is the historical association for information technology.
A separation of concerns:
  1. Platforms, tools, middleware, DBMS systems, ERP systems, file transfers, messaging, web traffic. These have their own approach to release management because of peculiar dependencies. One of those dependencies is continuity for business applications.
  2. Operating systems, the software component enabling hardware systems. The hardware is more and more virtualised by software.
  3. Network, adapters, line connections, routers, firewalls, segmentations for safety isolations.
Although this seems a logical separation of concerns, there are still a lot of misunderstandings and heated discussions. A platform or DBMS is, from the perspective of the operating system, an "application"; for a "business application" it is a required infrastructure component.
➡ 💣 When this kind of discussion is seen, the expectation of achieving a compliant environment is mission impossible.
Divide and conquer
😉 Divide a bigger issue into smaller ones to have each of them better understood.
Release management and safety are closely tied in the core information flows (the operational plane) of the organisation. Analysing the core organisational information used in the analytical plane poses an additional challenge for the safety quest. Accepting externally prescribed platforms for usage and safety, while the organisation stays accountable, is a strange additional issue.
enisa secure procurement
Guidelines Administrator roles
System services are classified as "high privileged". Security should be set by the principle of least privilege.
Administrator roles are classified as "high privileged". Security should be set by the principle of least privilege.
An example of a guideline clearly stating what should be done: indispensable baseline security requirements (Enisa, procurement: secure ICT products and services, 2016).
The provider shall design and pre-configure the product according to the least privilege principle, whereby administrative rights are only used when absolutely necessary, sessions are technically separated and all accounts will be manageable.
😲 The usual idea at customers is that this is in place at suppliers, without any validation that it was done conforming to the guidelines. The result is a lot of frustration among the organisation's security staff, not understanding why access rights for DevOps are left unnecessarily wide open.
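A minimal sketch of a validation the customer could run, assuming a hypothetical export of account/role pairs and hypothetical role names: flag every account holding an administrative role that is not on an approved list, instead of trusting that the supplier pre-configured least privilege.

  # Hypothetical least-privilege check over an exported list of account/role pairs.
  ADMIN_ROLES = {"sysadmin", "dbadmin", "security_admin"}     # assumed high-privileged roles
  APPROVED_ADMINS = {"ops_break_glass", "dba_primary"}        # assumed approved accounts

  def violations(assignments):
      """assignments: iterable of (account, role); yield accounts breaking least privilege."""
      for account, role in assignments:
          if role in ADMIN_ROLES and account not in APPROVED_ADMINS:
              yield account, role

  export = [("dev_team_all", "sysadmin"), ("dba_primary", "dbadmin")]
  for account, role in violations(export):
      print(f"review: {account} holds high-privileged role {role}")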

R-1.3.2 Legal Safety obligations
Regulatory Guidelines
😲 These are well documented by several sources at a high and medium abstraction level. However, there is no real pressure on organisations to get this compliant and well done. A change may come with new regulations requiring to follow them, NIS2 and DORA (EU).
❗ What is not there: detailed technical implementation instructions with checklists.
❗ What is not in place: regulatory audits with corrective controls.
There is an intangible connection with release management, see: Legal SDLC obligations
Guidelines iec/iso 27002 (2013), Privileged access.
The environment should be well secured. Safety measures against accidental (mistakes) or intended (hack) destruction. Those guidelines, mentioned in regulations, are clear.
PIM, Privileged Identity Management: the intention is to know who did which critical administrative action at what moment in the production environment.
iso 27002 9-2-3

R-1.3.3 Safety, cyber, the technology mindset
Securing resource relations is part of the release train. These are technical resources associated with privileged roles. Safety, cyber security and multi-tenancy must be in place for information systems.
The organisation, the product manager, should be in the lead for:
A logical framework for data management connections
What is missing: a framework for data connections, with a defined localisation for the specific situation in an organisation. The goal is simple: no uncontrolled interventions on business information between the D, T, A and P environments.
DTAP segregation Data pipelines technique
In a figure:
Administrators power Any authorisation model (security) made effective at Test should conform to the intended Production. Test should be as similar as possible to the intended Production situation. Data connections, simulated or active, conform to the intended Production.
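A minimal sketch, with hypothetical role dictionaries, of the check implied here: compare the authorisation model effective at Test against the intended Production model and report every difference before promotion.

  # Hypothetical comparison of the authorisation model in Test against intended Production.
  def model_diff(test_model: dict, prod_model: dict) -> list[str]:
      """Each model maps role -> set of permissions; return human-readable differences."""
      issues = []
      for role in sorted(set(test_model) | set(prod_model)):
          t, p = test_model.get(role, set()), prod_model.get(role, set())
          if t != p:
              issues.append(f"role '{role}': test={sorted(t)} production={sorted(p)}")
      return issues

  test = {"clerk": {"read"}, "admin": {"read", "write", "delete"}}
  prod = {"clerk": {"read"}, "admin": {"read", "write"}}
  print(model_diff(test, prod))   # admin differs: Test grants more than intended Production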
Details on what more is needed for safety depend on the technologies used. The question how to make the safety life cycle lean is answered by removing constraints. Steps in responding for improving safety:
Misunderstanding ICT - business lines, shadow ICT
Finance, business, marketing, sales and customer-relations departments are getting the freedom to do their own ICT. Financial reporting is inside information but is, at moments, sensitive: other persons should not know about it. This started with the SOX regulation.
Self-service ICT, cloud services, shadow ICT are often seen as successful ways to overcome misunderstandings, but with compliance consequences: almost all required policies are missing.
A culture of everyone having their own machines
🤔 Doing all ICT work on a single or limited number of machines requires good management for access and resource usage on all shared resources. Cloud native benefits from using shared resources. The complexity of the good management is assumed to be solved by the supplier.
🤔 The alternative is having dedicated machines for all components of any business application. The complexity in this one is good management of the interactions between the components within business applications, besides those passing across business applications.
To choose which of these complexity challenges to be confronted with, and at what level:
The Service Oriented Architecture: SOA, API-s
In complex environments with many interactions, each doing partial actions for the business, connections are needed for exchanging data and information, by messages and/or in bulk. The complexity of information versions at the interactions is the disadvantage; it resulted in an aversion to SOA and the service bus (2010).
Safety attention points:
The common confusion is not seeing all these components as topics on their own and, on top of that, holistically.
Classic life cycle safety technology focus
SOC, Security Operations Center, and Computer Operations once started with RACF, SMF, system performance, system sizing. If you want to perform library change analysis, you also need CKFREEZE data sets with checksum information.
basic SoC CSI
An ancient figure from the 80's.

Collecting all kinds of resources.
Logs are shared for different goals.

The way of working in principle did not change. The modern product: IBM QRadar Suite is a modernized threat detection and response solution designed to unify the security analyst experience and accelerate their speed across the full incident lifecycle. The portfolio is embedded with enterprise-grade AI and automation to dramatically increase analyst productivity, helping resource-strained security teams work more effectively across core technologies. Integrated products for: Endpoint security (EDR, MDR), SIEM, SOAR. (2024)
order in logic

R-1.4 Information processing functionality

In the beginning, value streams were managed by humans on the floor. At some moment this capability got lost.
To achieve by transformations: the association here is recovering what once was done well, although hardly noticed.
R-1.4.1 Information, what is it about?
The information factory
The mindset for a circular flow, using a value stream, must always have been in my mind. The operational plane, similar to a factory: JST JCL value stream politics impact
Pull: 0, 1, 2, 3
Demand request

Push: 4, 5, 6, 7, 8, 9
Delivery result

Value stream materials: Left to right

See right side:
The analytical plane: similar to a factory.
The EDWH 3.0, logistics as the basic central pattern.
In an inbound area the validation of goods, information, is done. At the manufacturing side, outbound, are the internal organisational consumers.
Note: not only a dashboard to be used by managers; all kinds of consumers, including operational lines, are covered.
df_csd01.jpg
The two vertical lines manage who has access to what kind of data: authorised by the data owner, registered data consumers, monitored and controlled.
The confidentiality and integrity steps are not bypassed with JIT (lambda).
The word data contracts is applicable here. It is not something reserved for reporting purposes (BI, AI) only.
💣 The EDW 3.0 is holistic at enterprise level; it covers the operational value stream and others, controlling what is coming in and what is going out. This is a disruptive, not usual viewpoint.
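A minimal sketch of the data-contract idea at the inbound area, with a hypothetical contract for one delivery: records that do not satisfy the agreed structure are rejected before they move on, regardless of whether the consumer is operational or analytical.

  # Hypothetical data contract for an inbound delivery: the structure agreed between
  # producer and consumer, validated before anything moves on in the flow.
  CONTRACT = {
      "customer_id": int,
      "order_date": str,      # ISO date, an assumption for this sketch
      "amount": float,
  }

  def validate(record: dict) -> list[str]:
      """Return the list of contract violations for one inbound record."""
      errors = [f"missing field '{k}'" for k in CONTRACT if k not in record]
      errors += [f"field '{k}' is not {t.__name__}"
                 for k, t in CONTRACT.items() if k in record and not isinstance(record[k], t)]
      return errors

  print(validate({"customer_id": "42", "order_date": "2024-05-01"}))
  # -> ["missing field 'amount'", "field 'customer_id' is not int"]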

dual feeling
ETL ELT classic decoupling in the modern times
ETL vs ELT: Decoupling ETL. Traditional ETL might be considered a bottleneck, but that doesn't mean it's invaluable. The same basic challenges that ETL tools and processes were designed to solve still exist, even if many of the surrounding factors have changed. For example, at a fundamental level, organizations still need to extract (E) data from legacy systems and load (L) it into their data lake. And they still need to transform (T) that data for use in analytics projects. "ETL" work needs to get done, but what can change is the order in which it is achieved and the new technologies that can support this work.
💣 ELT, ETL as standard patterns: a disruptive, not usual viewpoint.
🚧 Retrospectives and corrective actions are needed.
👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
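A minimal sketch of the decoupling described above, with hypothetical function names: extract and load run as one data-movement step, and the transformation runs later, on the already loaded raw data, as a separate step that can change at its own pace.

  # Hypothetical ELT sketch: data movement (E + L) decoupled from preparation (T).
  def extract(source: str) -> list[dict]:
      # placeholder for reading from the legacy system
      return [{"id": 1, "amount": "12.50"}, {"id": 2, "amount": "7.00"}]

  def load(rows: list[dict], lake: list[dict]) -> None:
      lake.extend(rows)                      # land the raw data unchanged

  def transform(lake: list[dict]) -> list[dict]:
      # preparation for analytics happens afterwards, on the loaded raw data
      return [{"id": r["id"], "amount": float(r["amount"])} for r in lake]

  data_lake: list[dict] = []
  load(extract("legacy_system"), data_lake)  # movement step, scheduled frequently
  analytics_ready = transform(data_lake)     # preparation step, changed independently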

R-1.4.2 Operational plane - value streams
The ER-star diagram - predictable planned process
Ramping up the administrative information line, an administrative process, is similar to the physical material flow. A reason to process something (a trigger) starts the value chain.
The pull & push:
Information-Flow-Arrow - request kanban Pull kanban: the request with all necessary preparations and validations.
Part 1: Preparation Designing a kanban system on paper is much easier than implementing it on the shop floor.
Material-Flow-Arrow - delivery Push delivery: with all necessary quality checks tracked by a kanban.
How do you organize the waiting of the kanban for processing? It should be a first-in, first-out system, ..
ramp-up-2 Now that all material is “kanbanized,” you have to reduce material. .... Overall, this debugging process will also help you with the "check" and "act" of the PDCA sequence. If you do this debugging, you will learn if the system actually works and if it is (hopefully) better than what you had before. Don´t take it for granted that just because you changed something, it must be better than before!

The ER-star diagram (Entity Relationship)
Grouping features around an element is the standard for operational information processing. Normalising the information structure has the goal of avoiding any duplicates, so there is certainty about what the correct version of the stored information is. Third normal form
Star model - AlC type3
Advantages and disadvantages are:
Complicated data models & access plans.
Standard well known methodologies.
Required: decoupling for operational-analytical.
Consuming information requires a transformation.


Chain Historicals: Operational Plane
There are a lot of differences; it is more an ER model, similar to the DWH star schema.
🚧 Retrospectives and corrective actions are needed.
👐 Goals, criteria: solving the real bottlenecks hampering the organisation.

R-1.4.3 Analytical Plane - improving flows
Datastorming EL-T
Becoming data driven, agile thinking 💡
In a data-driven approach there is a cycle around the engineer and the data analyst.
Decoupling ETL
A clean separation between data movement and data preparation also comes with its own specific benefits:
Data driven, machine learning, AI.
The labelling of data indicates that a denormalised tabular format is the driver with machine learning. The highly normalised data approach is abandoned.
Star model - Flatten
Data preparation for machine learning still requires humans
Most enterprise data is not ready to be used by machine learning applications and requires significant effort in preparation. ... For supervised machine learning to work, the algorithms need to be trained on data that has been labelled with whatever information the model needs.
A disruption compared with the modelling for operational information, which avoids duplications.
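A minimal sketch, assuming two hypothetical normalised tables and an available label column, of the flattening referred to here: the normalised structure is joined into one denormalised, labelled table, the shape most machine-learning tooling expects.

  # Hypothetical flattening: join normalised tables into one labelled, denormalised table.
  import pandas as pd

  customers = pd.DataFrame({"customer_id": [1, 2], "segment": ["retail", "corporate"]})
  orders = pd.DataFrame({"order_id": [10, 11, 12],
                         "customer_id": [1, 1, 2],
                         "amount": [120.0, 80.0, 950.0],
                         "churned": [0, 0, 1]})      # the label, assumed to be available

  training_table = orders.merge(customers, on="customer_id", how="left")
  # One wide row per order: customer attributes are duplicated on purpose,
  # exactly the disruption compared with normalised operational modelling.
  print(training_table)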
Analyse culture: exploratory framework
A lost figure but nicely showing what should be done.
Analyse culture: exploratory framework
Complicated: the fundamental question is what is needed, for whom, for decisions.
Other complications:
Sense-making of collected sources: which ones?
Supporting decisions at what level of certainty?
Supporting decisions at what time horizon?

🚧 Retrospectives and corrective actions are needed.
👐 Goals, criteria: solving the real bottlenecks hampering the organisation.

R-1.4.4 Research using information
Research is more time consuming and requires a lot of human intelligence and interactions. This is very likely a root cause for confusion between analytics and research. Understanding the information on all involved customers and other interactions was the research area of marketing. Some examples: this is not unusual to do and is part of normal business. When done too excessively and without controls, there are issues.
💣 Adding a research type, decoupled from the analytical plane, is not a common viewpoint.
🚧 Retrospectives and corrective actions are needed.
👐 Goals, criteria: solving the real bottlenecks hampering the organisation.

ancient computer

R-1.5 Master Data: Communication Cooperation

Data governance, a shared glossary for information processing, is avoided. The idea is that the base for knowledge and understanding is just a cost factor; in fact it is the enabler. Changing the avoidance of data management requires becoming aware of what went wrong.

R-1.5.1 Changing the wrong thing: "developer problem"
Ambiguity, the wrong problem
"Twice the work in half the time" is a well-known dogma for creating code faster. It appealed to execs unhappy with what they were getting for their tech spend. (Mike Goitain 2024)
Scrum was never going to work in many orgs because they didn't have a "developer problem" to fix in the first place. We got stuck treating superficial symptoms instead of the underlying causes. The real problems lay in a flawed legacy mental model. Fixing this will require:
Ambiguity, Misunderstanding by words
In the context of information retrieval a thesaurus is a controlled vocabulary that seeks to dictate semantic manifestations of metadata in the indexing of content objects. A thesaurus serves to minimise semantic ambiguity by ensuring uniformity and consistency in the storage and retrieval of the manifestations of content objects.
Composed of at least three elements:
  1. a list of words (or terms)
  2. the relationship amongst the words (or terms), indicated by their hierarchical relative position (e.g. parent/broader term; child/narrower term, synonym, etc.)
  3. a set of rules on how to use the thesaurus.
There are standards for this, iso 25964.
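A minimal sketch of those three elements with hypothetical terms: a term list, the broader/narrower/synonym relations between them, and one usage rule (resolve a synonym to its preferred term before storing or searching).

  # Hypothetical mini-thesaurus: terms, their relations, and one usage rule.
  THESAURUS = {
      "vehicle": {"narrower": ["car", "truck"], "synonyms": []},
      "car":     {"broader": "vehicle", "synonyms": ["automobile"]},
      "truck":   {"broader": "vehicle", "synonyms": ["lorry"]},
  }

  def preferred_term(word: str) -> str:
      """Usage rule: always index and search with the preferred term, never a synonym."""
      for term, relations in THESAURUS.items():
          if word == term or word in relations.get("synonyms", []):
              return term
      raise KeyError(f"'{word}' is not in the controlled vocabulary")

  print(preferred_term("lorry"))        # -> truck
  print(THESAURUS["car"]["broader"])    # -> vehicle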
🚧 Retrospectives and corrective actions are needed.
👐 Goals, criteria: solving the real bottlenecks hampering the organisation.

R-1.5.2 Focus on the wrong thing: "developer interests"
Tools, in this case programming languages, are a mindset lock-in by technology. Even then, the most modern of today is not the same tomorrow, or is it? Rethinking the "Real" Programming Languages: A Look at the Most-Used Technologies Today (Fernando Ferrer 2024)
In the ever-evolving landscape of technology, the tools and languages we choose to learn and use are often driven by trends, peer opinions, and a constantly shifting set of industry best practices. Yet, as we look at the current state of the programming world, a fascinating contradiction has emerged: the languages once dismissed as “not real” or “too slow” are now at the forefront of technology’s most significant advancements.
The Rise of Python and SQL: From Underdogs to Essential Skills
A decade ago, conversations among developers often echoed a common sentiment: SQL was merely a query language, unworthy of being considered alongside its more robust and versatile counterparts, and Python was labeled as a slow, toy language, suitable only for small scripts and academic exercises. Fast forward to today, and both SQL and Python have cemented their positions as two of the most indispensable languages in the modern tech ecosystem.
SQL: The Backbone of Data Management
SQL, the Structured Query Language, has transcended its origins as a simple tool for querying databases. It is now the backbone of data management and analysis across industries. In an era where data is king, the ability to efficiently extract, manipulate, and analyze large datasets is crucial. SQL’s declarative syntax and powerful query capabilities make it the go-to language for data professionals.
More than 60% of data analysts, data scientists, and business intelligence professionals use SQL daily to interact with data warehouses, run complex analyses, and support decision-making processes. Despite its straightforward appearance, SQL’s power and flexibility are unparalleled, enabling everything from simple data retrieval to intricate transformations that fuel the insights driving business strategies.
Mentioning SQL as powerful eliminates the age-of-the-tool argument. In that respect the Cobol language, not seen as modern, is a similar case for discussions.
Python: The Language of Automation, Data, and Beyond
Python's journey from a perceived “slow” language to one of the most popular programming languages worldwide is nothing short of remarkable. Today, Python’s versatility makes it a favorite among developers, data scientists, and machine learning practitioners. Its simplicity and readability lower the barrier to entry for beginners while offering robust libraries and frameworks for advanced users.
In the fields of data science and machine learning, Python is the undisputed leader. Libraries such as Pandas, NumPy, and Scikit-Learn have made it the language of choice for data manipulation, statistical analysis, and algorithm development. Meanwhile, frameworks like Django and Flask have empowered web developers to build scalable applications quickly.
Despite criticisms of its speed, Python’s flexibility and extensive ecosystem have enabled it to dominate areas where productivity and ease of use are more critical than raw performance.
The "Not-So-Real" Languages Leading the Pack
Interestingly, when we look at the list of the most-used programming languages today, the top two spots are occupied by JavaScript and HTML. These are the very technologies often dismissed by purists as “not real programming languages.”
JavaScript: The King of the Web
JavaScript’s evolution from a simple scripting language for web browsers to a full-fledged, multi-paradigm programming language has been nothing short of transformative. Once regarded as a tool for adding trivial interactions to websites, JavaScript is now the cornerstone of modern web development, powering everything from front-end frameworks like React and Angular to server-side environments like Node.js. JavaScript’s ubiquity and versatility have made it the most popular language among developers. It is the language of the web, used in everything from building interactive websites and web applications to creating server-side logic and even mobile apps through frameworks like React Native.
HTML: The Language that Shapes the Web
HTML, or HyperText Markup Language, is another so-called “not-real” language that dominates the development landscape. While it may lack the complexity of traditional programming languages, HTML is fundamental to the structure and presentation of web content. It forms the skeleton of every web page, defining the structure, layout, and elements that users interact with. Without HTML, the web as we know it would not exist. Its simplicity is its strength, allowing developers to create accessible, well-structured content that can be rendered across devices and platforms.
What This Means for Developers
The lesson here is clear: what makes a programming language "real" or "valuable" is not its complexity or speed but its utility in solving problems. The most used programming languages today are not necessarily the ones considered the most powerful or efficient; they are the ones that enable developers to build solutions, extract insights, and create value.
As technology professionals, we must move beyond the notion of what is considered a “real” programming language and focus on the practical applications and impact of these tools.
👉🏾 Learning SQL or Python might not make you a hardcore systems programmer, but it will make you an invaluable asset in a world that increasingly relies on data-driven decision-making and automation.
👉🏾 Similarly, dismissing JavaScript and HTML as mere scripting tools overlooks their central role in shaping the web. The ability to create interactive, dynamic, and user-friendly web experiences is essential in a digital world where first impressions are often made online.
Looking Forward
In conclusion, the languages we use are tools, and like any tool, their value lies in how effectively they help us solve problems. SQL, Python, JavaScript, and HTML have proven their worth in diverse contexts, from data analysis and automation to web development and user experience design.
As the tech landscape continues to evolve, so too will the languages and tools we rely on. The key for developers is not to be bound by preconceived notions of what constitutes a “real” language but to remain adaptable, open-minded, and focused on solving real-world problems with the best tools available.
After all, the most valuable programming languages are not those that conform to arbitrary definitions of legitimacy but those that get the job done.
🚧 Retrospectives and corrective actions are needed.
👐 Goals, criteria: solving the real bottlenecks hampering the organisation.

Systems engineering
R-1.5.3 Missed: Why Systems engineering
At another level is rethinking services at the system level, rethinking systems to improve. What Can We Learn From Systems Engineering? (Glen Alleman 2024)
The Lean Aerospace Initiative and the Lean Aerospace Initiative Consortium define processes applicable in many domains for applying lean. At first glance, there is no natural connection between Lean and System Engineering. The ideas below are from a paper I gave at a Lean conference.
👉🏾 Key Takeaways
Core Concepts of Systems Engineering
Capture and understand the requirements for Capabilities assessed through Measures of Effectiveness (MOE) and Measures of Performance (MOP). Could you ensure requirements are consistent with what is predicted to be possible in a solution in these MOEs and MPs?
  1. Treat goals as desired characteristics for what may not be possible.
  2. Define the MOE, MOP, goals, and solutions for the project's whole lifecycle in units meaningful to the buyer.
Could you distinguish between the statement of the problem and the description of the solution?
Could you identify descriptions of alternative solutions?
Develop descriptions of the solution.
  1. Baseline each statement of the problem and the statement of the solution.
    Except for simple problems, develop a logical solution description.
  2. Be prepared to iterate in design to drive up effectiveness.
  3. Base the solution of evaluating its effectiveness in units of measure meaningful to the buyer.
Independently verify all work products.
  1. Validate all work products from the perspective of the stakeholders.
Management needs to plan and implement effective and efficient transformation of requirements and goals into a solution description.
Typical System Engineering Activities
  1. Technical management
  2. System design
  3. Product realization
  4. Product control, Process control
  5. Technical analysis and evaluation
  6. Post-implementation support
Steps to Lean Thinking [2]
  1. Specify value
  2. Identify value stream
  3. Make value flow continuously
  4. Let customers pull value
    Pursue perfection
Differences and Similarities between Lean and Systems Engineering
  1. Both emerged from practice. Only later were the principles and theories codified.
  2. Both have focused on different phases of the product lifecycle. SE is generally focused on product development and more focused on planning. Lean is generally focused on product production and more focused on empirical action.
  3. Unlike Lean, SE focuses less on quality, except for Integrated Product and Product Development (IPPD).
Despite these differences and similarities, both Lean and Systems Engineering are focused on the same objective: delivering products or lifecycle value to the stakeholders. The lifecycle value drives both paradigms and must drive any other process paradigm associated with Lean and Systems Engineering, including paradigms like software development, project management, and the very notion of agile. A critical understanding often missed is that Lifecycle Value includes the cost of delivering that value. Value can't be determined in the absence of knowing the cost. ROI and Microeconomics of decision making require both variables to be used to make decisions.
👉🏾 What do we mean by lifecycle?
Generally, lifecycle combines product performance, quality, cost, and fulfillment of the buyer's needed capabilities.[3] Lean and Systems Engineering share this common goal—the more complex the system, the more contribution there is from Lean and SE.
👉🏾 Putting Lean and Systems Engineering Together on Real Projects
First, some success factors in complex projects [4]
  1. Dedicated and stable interdisciplinary teams
  2. Use of prototypes and models to generate tradeoffs
  3. Prioritizing product features
  4. Engagement with senior management and customers at every point in the project
  5. Some form of high-performing front-end decision process that reduces the instability of key inputs and improves the flow of work throughout the product lifecycle.
This last success factor is core to any complex environment, no matter the process. Without stability of requirements and funding, improvements to workflow are constrained.
Adapting to changing requirements is not the same as making the requirements—and the associated funding—unstable. Mapping the Value Stream to the work process requires some level of stability. Systems Engineering, as a paradigm, adds measurable value to any Lean initiative by searching for this stability. The standardization and commonality of processes across complex systems are the basis for this value. [5]
👉🏾 Conclusions
Lean and SE are two sides of the same coin regarding creating value for the stakeholder.
Lean and SE complement each other during different project phases – ideation, product trades for SE, and production waste removal for Lean anchor both ends of the spectrum of improvement opportunities.
Value stream thinking makes the paths to transition to a Lean paradigm visible while maintaining the systems engineering principles. [6]
The result is the combination of Speed and Robustness – systems are easily adaptable to change while maintaining fewer surprises, using leading indicators to make decisions, and decreasing sensitivity to production and use variables.
🚧 Retrospectives and corrective actions are needed.
👐 Goals, criteria: solving the real bottlenecks hampering the organisation.

Confused-2

R-1.6 Maturity 0: ICT service impact NOT understood

From the three ICT, ITC interrelated scopes, only having the focus on IT4IT: getting a mature Life Cycle Management (LCM) requires understanding and acknowledgment of the layered structure.
Each layer has its own dedicated characteristics.

R-1.6.1 Historical lockin release management
The beginning of release management
Mainframe usage: there were several approaches in use, not a single one covering everything.
Those were: the same challenges exist these days, with just some other technology.
😱 ❌ Personal experience, reviewed tool: Panvalet. It got rejected for having too little additional value; in-house scripting was more efficient, more effective, more reliable.
Computer Associates Panvalet (also known as CA-Panvalet) is a revision control and source code management system for mainframe computers such as the IBM System z and IBM System/370 running the z/OS and z/VSE operating systems.
😱 💰 Personal experience: Endevor, the last mainframe-only tool. It was a sad experience because management forced an external automated tool for the complicated situations without realising the impact and the cost involved.
Endevor is a source code management and release management tool for mainframe computers running z/OS. It is part of a family of administration tools by CA (Computer Associates), used to maintain software applications and track their versions.
👁❗ Focus on external opinions; external technology is the same, seen everywhere.
Failing in quality, time of deliveries and cost.

The project triangle at Software Development Life Cycles
Project management devil triangle Wanting artefacts deployed, ⬅
💣 forgetting the goal at production with the artefacts' quality (not good).
Wanting deployed into production, ⬅
💣 forgetting to have selected verified well functional artefacts (not cheap).
Wanting at production artefacts, ⬅
💣 forgetting deployment (not fast).

CI Continuous Integration / CD Continuous Delivery
What is the idea of CI/CD? A technology goal where the business goal is not mentioned.
A post by Sten Pittet describing what is going on:
Atlassian
CI and CD are two acronyms that are often mentioned when people talk about modern development practices. CI is straightforward and stands for continuous integration, a practice that focuses on making preparing a release easier. But CD can either mean continuous delivery or continuous deployment, and while those two practices have a lot in common, they also have a significant difference that can have critical consequences for a business.
CI CD atlassian bitbucket
Developers practicing continuous integration merge their changes back to the main branch as often as possible. The developers' changes are validated by creating a build and running automated tests against the build.
By doing so, you avoid the integration hell that usually happens when people wait for release day to merge their changes into the release branch.
👁❗ Wait: by avoiding a shared development environment, a complete external technology is introduced to achieve a shared development environment after all.
That sounds like a lot of waste being introduced.
Continuous integration puts a great emphasis on testing automation to check that the application is not broken whenever new commits are integrated into the main branch.
👁❗ There is more: getting to that shared environment, the result of CI, there is no notion of how to do quality testing: program/component test, integration test, acceptance test. That does not conform to legal and/or compliance requirements that mention the level of testing against organisational mission goals.

Continuous delivery is an extension of continuous integration to make sure that you can release new changes to your customers quickly in a sustainable way. This means that on top of having automated your testing, you also have automated your release process and you can deploy your application at any point of time by clicking on a button.
👁❗ Why is the business not involved in acceptance, and should it have to wait to see the result only after it has been deployed into production? Many business applications have an agreed release date for the business, with good reasons.
👁❗ Just adding some products in a web shop should not be done by changing logic but by changing business data.
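A minimal sketch, with hypothetical test-suite callables, of the point being made: a release gate that records component, integration and acceptance testing explicitly, so the level of testing that compliance asks about stays visible instead of hidden behind a generic CI build.

  # Hypothetical release gate making the test levels explicit before deployment.
  def run_component_tests() -> bool:   return True    # placeholder suites
  def run_integration_tests() -> bool: return True
  def run_acceptance_tests() -> bool:  return False   # business has not accepted yet

  GATES = [("component", run_component_tests),
           ("integration", run_integration_tests),
           ("acceptance", run_acceptance_tests)]

  def release_allowed() -> bool:
      for level, suite in GATES:
          passed = suite()
          print(f"{level} tests: {'passed' if passed else 'FAILED'}")
          if not passed:
              return False
      return True

  print("deploy" if release_allowed() else "hold the release")   # -> hold the release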

R-1.6.2 Historical lockin safety management
In-house built safety API's to centralised systems
In the 80's information technology was very new. In that era there were no centralised security systems, or only ones with very limited functionality. The next best approach was in-house built dedicated API's for safety, cyber security. The first change was removing hard-coded logic from business applications in favour of the commercial tools that became available.
🤔 The first commercial tools: ACF2 hierarchical, RACF groups, AD LDAP.
No matter what commercial product is used, it will lack support for complex situations, e.g. in signing sensitive approvals by multiple persons under dedicated conditions.
👁❗ Focus on external opinions, external technology is the same, seen everywhere.
Failing in quality, time of deliveries and cost.

Descriptive, diagnostic, predictive, proactive, prescriptive
A hopeful start to becoming process-oriented for safety, cyber security.
mcafee siem analytics
advanced security analytics
Trellix formerly McAfee (2024).
McAfee is consolidated with others.
SOAPA (Oltsik 2016): Enterprise security operations and analytics requirements are forcing rapid consolidation into something new that ESG calls a security operations and analytics platform architecture (SOAPA). A new name again: SOAR, security orchestration, automation and response, a stack of compatible software programs that enables an organization to collect data about cybersecurity threats and respond to security events with little or no human assistance. The SOAPA vs SOAR difference: as security guru Bruce Schneier would say, "security is a process, not a product." Similarly, the SOAR term focuses on the technology directions of security operations processes rather than the processes themselves.
👁❗ Commercial products sadly are technology-driven, ignoring the process.

Components as topics on their own and on top holistic
There are a lot of safety attention points:
Confusing and blocking are: 👁❗ ignoring the safety process and its requirements is common with attention-grabbing technology.
👁❗ Of the long list of safety attention points only a very small part gets some attention.

cynefin
Removing bottlenecks, continuous flows
Integration of objects, deployment of business applications: how to align with compliance rules from the business perspective? That topic is a complicated situation.
cynefin The complicated domain consists of the "known unknowns". The relationship between cause and effect requires analysis or expertise; there are a range of right answers. The framework recommends "sense analyze respond": assess the facts, analyze, and apply the appropriate good operating practice.

R-1.6.3 Historical lockin information flows
There are several types of information processing; there is no strategy or vision in place at the moment. Information flow by type & goal: 👁❗ a cultural change is required to solve these kinds of gaps.

R-1.6.4 Historical lockin master data
For master data the goal is understanding each other and understanding what is going on. Nothing is in place at the moment; anything would be an improvement.
A limited but fundamental shortlist: 👁❗ A cultural change is required to solve these kinds of gaps.

The triangle Operations Technology, Business Algorithms
Security at transit devil triangle Having technology and algorithms, ⬅
💣 forgetting how to run those operationally, well secured.
Defining algorithms to run operationally, ⬅
💣 failing to have well-selected, appropriate generic technology.
Getting technology for operations, ⬅
💣 forgetting the goal of algorithms in business applications.


R-2 ICT service gaps Understanding: getting them solved


feel_brains_05

R-2.1 Seeing ICT Service Gap types

Applications are business organisational artifacts served by technology. Business rules, business logic, are set by the organisation. Service gaps are in each of those four areas.
Using solutions to solve the service gaps.
R-2.1.1 Deducing the reason for ICT service gaps
Design of the information flow, assembly line
It is not only blindly following instructions while managing the product process flow. The value stream has mandatory requirements to be fulfilled and shown for the products in the portfolio. During design and validation of the product they should get materialised. As long as the product is relevant, that is, customers are using it or are able to refer to it, the information of the portfolio product for a dedicated version should at least be retrievable.
To get covered by information knowledge by a portfolio: Jabes process Assurance
In a figure:
See right side.
Artificial Intelligence obligations
AI products An AI product is a software or hardware solution that integrates artificial intelligence technologies to automate tasks, support decision-making, and enhance functionalities. Using AI (machine learning or deep learning algorithms), an AI product involves solving specific problems, optimising processes, or providing intelligent insights based on using data to improve the process aspiring to achieve human level ability.
It is an interesting idea to see AI as a product to deliver to the ones that do the real information-flow processing. A product like the well-known ANPR (automatic number-plate recognition) software, delivered as a tool to others, similar to a drilling, milling or lacquering unit, in their information flows.
There are several concepts: there is a lot of AI resistance because of too many mistakes with misunderstandings and the missing safe state and correction options for mistakes.
The four challenges that need to be solved are structurally related to a portfolio.
data driven BI&A
The SIAR model is the highest abstraction of processes in many dimensions. With four stages in four quadrants, the holistic overview is placed in the middle. In the highest abstraction the middle (centre) is symbolised by an eye.
An intermediate of the SIAR abstraction:
9 plane BI&A panopticon
A figure:
See right side

S South: Situation, Steer
I West: Input, Ideas
A North: Actions, Analyse
R East: Result, Request

Process flow value stream.
An early SIAR figure for process flow. It is full of colours: blue for the operational process flow, green for the assembly manufacturing and yellow for the control (pull).
The process of engineering an enterprise operational system. SIAR is an alternative, another materialisation of two dual components, alongside the well-known PDCA, DMAIC and OODA cycles.
data valuestream
See right side
1 Identify customer value.
📚 2 Map the value stream.
3 Design logical Flow.
4 Establish Pull request. IV - III
4 Implement Push delivery I - II
🎭 5 Seek Perfection.

R-2.1.2 The Administrative Cyber System Life Cycle
Burning houses
Rationale
The SDLC, system development life cycle, enables: when this activity is not well understood or not well controlled, it easily becomes a garbage mess.
Implications
There are conflicts in accountabilities and responsibilities: historically grown ideas, following just technical hypes, are blocking factors for doing this well.
R-2.1.3 Safety at Administrative Cyber Systems
Burning houses
Rationale
Safety, cyber security, is technology enabling: when this activity is not well understood or not well controlled, it easily becomes a garbage mess.
Implications
There are conflicts in accountabilities and responsibilities: historically grown ideas, following just technical hypes, are blocking factors for doing this well.
R-2.1.4 Administrative Cyber Systems, the Operational Plane
artifacts in process patterns
The visible materialized data, information representations:
  1. Extract and load materials into a Landing area
  2. Validate the material at landing placing them into Staging
  3. Prepare Staging for transformation processing at Semantic
  4. Deliver transformations results into Databank
Between those data materialisations there are processing activities.
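A minimal sketch of the four materialisations as one pipeline, with hypothetical processing steps between them; each area only receives what the previous activity produced.

  # Hypothetical pipeline over the four materialised areas:
  # landing -> staging -> semantic -> databank, with processing in between.
  def extract_and_load(source):          # 1. into the Landing area
      return list(source)

  def validate(landing):                 # 2. Landing -> Staging
      return [r for r in landing if r.get("id") is not None]

  def prepare(staging):                  # 3. Staging -> Semantic
      return [{**r, "amount": float(r["amount"])} for r in staging]

  def deliver(semantic, databank):       # 4. Semantic -> Databank
      databank.extend(semantic)

  databank: list[dict] = []
  landing = extract_and_load([{"id": 1, "amount": "10.0"}, {"id": None, "amount": "x"}])
  deliver(prepare(validate(landing)), databank)
  print(databank)    # only the validated, prepared record reaches the databank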
Information flow, using closed loops
Monitoring what is going on: closed loops on operational flows should be in place. When there is highly vital product data, an additional flow evaluating new options before changing process flows is needed. In a figure: Process informationflow
Process change control by four lines
Changing process flows is done by changing four orchestrated, dependent process activities in the standard pattern. In a figure: Process processlcm
Burning fire
Rationale
Information processing using static representations in a flow, with process changes for other artifacts, is a mindset. Not giving this the needed attention is a high risk for the organisation.
Implications
There are no conflicts in accountabilities and responsibilities. Historically grown ideas, following just technical hypes, are blocking factors for doing this well.
R-2.1.5 Master Data, Communication: Administrative Cyber Systems
Burning fire
Rationale
Master data, communication is enabling the organisation: not giving this the needed attention is a blocking factor for all activities.
Implications
There are no conflicts in accountabilities and responsibilities. Historically grown ideas, following just technical hypes, are blocking factors for doing this well.
on prem datacenter

R-2.2 Solving: The ICT-SDLC challenge

Applications are business organisational artifacts served by technology. Business rules, business logic, are set by the organisation. Methodologies for technology to follow are applicable at: operations, tactics, strategy.
Intention: improving quality, quantity at lower cost.

R-2.2.1 System Life Cycles at multiple layers
The full System (ICT) Business Pyramid.
There are layers in a functional, organisational, technical solution. Multiple business units (verticals, tenants), each with multiple products, coexist in a cooperative environment.
Business applications served by a multiple tenants philosophy:
  1. Value streams, Business logic: code, instructions, in flows and life cycles (SDLC).
  2. Materials, Business data: information - metadata at several stages.
  3. Closed loops (lean): Dashboards, reports, analytics on operations with life cycles.
Tools, middleware, platforms, supporting "Business application" as a service.
⚠ Interactions by several tools are very likely.
Every tool, middleware or platform is subject to multiple life cycles (SDLC):
  1. Value streams supporting business: dedicated segregated configurations.
  2. Tools, middleware, platforms being out of the box software.
  3. Infrastructure: Configuration and settings for the tool interacting with the operating system and hardware (datacentre), the cloud.
  4. Closed loops (lean): Dashboards, reports, analytics optimizing operational behaviour.
Infrastructure, Datacentre, the cloud, a service enabling tools, middleware, platforms:
  1. Operating System (software)
  2. hardware
  3. Internal / external network connections
  4. Basic central security provisions
  5. Closed loops (lean): dashboards, reports, analytics optimizing operational behaviour.
layers - infrastructure bottom - business top.
In a figure the layered pyramidal structure,
See right side:
Anything is "data": Software, tools, materials, business data, business logic, operating system, network is technical data. Only the materials business data is information at value streams.
💣 Anything is "data" is too volatile uncertain complex ambiguous.
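A small sketch, assuming nothing beyond the layering described above, of how the three layers and their life cycles could be written down as data; all names are illustrative only.

layers = {
    "business_applications": {            # top layer, one entry per tenant / business unit
        "tenant_a": ["value_streams", "business_data", "closed_loops"],
        "tenant_b": ["value_streams", "business_data", "closed_loops"],
    },
    "tools_middleware_platforms": [        # middle layer: "business application" as a service
        "dedicated_configurations", "out_of_the_box_software",
        "infrastructure_settings", "closed_loops",
    ],
    "infrastructure": [                    # bottom layer: datacentre / cloud
        "operating_system", "hardware", "network_connections",
        "central_security", "closed_loops",
    ],
}

def lifecycles_for(layer: str) -> list:
    # Every layer runs its own life cycles (SDLC) on its own components.
    components = layers[layer]
    names = components if isinstance(components, list) else list(components)
    return [f"{name}: SDLC" for name in names]

print(lifecycles_for("infrastructure"))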
feel dual confused
R-2.2.2 LCM Basics for Business Applications
ALC Application Life Cycle principles
Doing release management is about promoting artifacts or more complex objects (entities). Knowing what it is technically and logically about is a prerequisite. From an old document, still valid, highly abstracted questions.
In order to successfully move an entity from one environment to another, a number of key questions must first be addressed: 👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
ALC-v2 Software Development lines
In the classic application life cycle management (ALC-v2) the focus is on programs, software that can be run (executed). All artifacts in scope are stored and archived in a "Software Library".
The promotion is as follows: Additional non-functional requirements: What is defined is: Artifacts, components whose content is unique to a dedicated environment should be minimised. These are blocking components and risks in release management, life cycles.
With parallel development it gets more complicated. What is changed in one line must possibly be backpropagated into others. The devil is in the details of possible backpropagation.
An explanation attempt,
See left side:
(video - controls)

Emergency fix: quick
Parallels: awaiting
Master: normal time

👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
The decision whether updates are to be propagated to other lines or are obsolete cannot be automated. Only the involved developers can know and should decide what to do.
The decision in which line to propagate and what to merge cannot be automated either.
  1. Lieutenants are coordinating the work of developers.
  2. Dictators are coordinating the work of lieutenants.
  3. Customers, the real emperors, are deciding on the work by dictators.
👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
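As a sketch only (not the author's tooling), the three lines from the video caption above - emergency fix, parallels, master - and the rule that every change raises the question of backpropagation into the other lines, while the decision itself stays with the developers:

from dataclasses import dataclass

LINES = ["emergency_fix", "parallel", "master"]   # line names taken from the caption above

@dataclass
class Change:
    artifact: str
    origin_line: str

def backpropagation_candidates(change: Change) -> list:
    # Every other line is a candidate; whether to merge stays a human decision.
    return [line for line in LINES if line != change.origin_line]

fix = Change(artifact="invoice_report", origin_line="emergency_fix")
print(backpropagation_candidates(fix))   # ['parallel', 'master'] - to be reviewed by the developers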

ALC-v2 Software Development lines.
The distribution to the involved machines other than the one being the source. An additional Acceptance machine and multiple Production machines are just an example.

In a video:
See left side:
(video - controls)

👉🏾 This abstraction is technology agnostic, valid for a lot of development types.

Artifacts, components that have the attribute of virtual reuse of another artifact.
Advantages, removes all complexity: Disadvantages: An explanation of the difference between what is seen and what is really physically present.

Three possible scenarios

A video,
See left side:
(video - controls)

👉🏾 This abstraction is technology agnostic, valid for a lot of development types.

R-2.2.3 Advanced LCM topics for Business Applications
ALC-v2 Embedded Artifacts: Database schemas, scheduling, metadata
Exporting metadata from one dictionary (database) of any type to another is the solution for release management. Adding related documentation to the portfolio is the finishing touch.
three layers building on the previous production version.
Dedicated tools are needed to export and import the artefact.

In a figure,
See right side:

👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
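A hedged sketch of the export/import idea for embedded artifacts, using SQLite purely as a stand-in dictionary; the real database types, the artefact format and the file names are assumptions for illustration.

import json
import sqlite3

def export_metadata(db_path: str, artefact_path: str) -> None:
    # Dump table and column definitions into a JSON release artefact.
    con = sqlite3.connect(db_path)
    tables = [row[0] for row in con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
    meta = {
        table: [dict(zip(("cid", "name", "type", "notnull", "default", "pk"), col))
                for col in con.execute(f"PRAGMA table_info({table})")]
        for table in tables
    }
    con.close()
    with open(artefact_path, "w") as f:
        json.dump(meta, f, indent=2)

def import_metadata(artefact_path: str, db_path: str) -> None:
    # Recreate missing tables in the target dictionary from the exported artefact.
    with open(artefact_path) as f:
        meta = json.load(f)
    con = sqlite3.connect(db_path)
    for table, columns in meta.items():
        ddl = ", ".join(f"{col['name']} {col['type']}" for col in columns)
        con.execute(f"CREATE TABLE IF NOT EXISTS {table} ({ddl})")
    con.commit()
    con.close()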

feel dual confused
ALC-v3 value stream flows.
The ALC-v3 (see R-2.1.4) splits the flow into four steps. Each of the four lines follows the ALC-v2 constructs. What is added is a dependency in time constraints: the disposal goes in reversed order.
👉🏾 This abstraction is technology agnostic, valid for a lot of development types.

Release management, deliveries, connections in flows.
Next level: using an AI building block, exchanging a component with other flows.

🚧 Retrospectives and corrective actions are needed.
👐 Goals, criteria: solving the real bottlenecks hampering the organisation.

Enterprise platform

R-2.3 Solving: Safety perspectives

Components (tools) purchased, middleware: Intention: enabling a holistically safe environment.
R-2.3.1 Safety perspectives within the organisation
Who is securing information by roles relations (I)?
The layers by technical provisions:
  1. Business applications served by a multiple tenants philosophy.
  2. Tools, middleware, platforms, supporting "Business application" as a service.
  3. Infrastructure, Datacentre, the cloud, a service enabling tools, middleware, platforms.
All three needed to be covered by safety, administrative cyber security controls.
The most well known activity is assigning natural persons, staff, to roles in the security administration. This is just a partial view of the landscape of all artifacts, objects to be secured.
⚠ Only able to see assigning security roles is a fundamental threat for the organisation.
💣 Safety, cyber security, manages volatile uncertain complex ambiguous situations.

Who is securing information by roles relations (II)?
The easy way is using the HR department for assigning roles. The result is a lot of issues at platforms and infrastructure that propagate into business applications.
security adding to release management
Devils triangle security

See figure right side


Role based Access Control: Keys, accounts, groups, security identifiers I
Even more confusing: within this triangle of responsibilities there are at least four technical subjects to manage.
Role based Access Control: Keys, accounts, groups, security identifiers II
The complexity in dependencies by logic and the related technical implementations is overwhelming. Technical alerts complicate the landscape further.
secure_relate03.jpg
Another devils triangle security

See figure right side


Role based Access Control: Keys, accounts, groups, security identifiers III
The functional and technical challenges are extending into organisational ones.
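To make the four technical subjects concrete, a minimal sketch with invented names: security identifiers, accounts, groups and keys, and how one business role resolves through them.

from dataclasses import dataclass, field

@dataclass
class SecurityIdentifier:      # the technical identity the platform actually checks
    sid: str

@dataclass
class Account:                 # personal or non-personal account
    name: str
    identifier: SecurityIdentifier

@dataclass
class Group:                   # groups collect accounts for a business role
    name: str
    members: list = field(default_factory=list)

@dataclass
class Key:                     # keys/permissions granted to groups or accounts
    purpose: str
    granted_to: str

def effective_access(account: Account, groups: list, keys: list) -> list:
    # Resolve which keys an account can use via its memberships (or directly).
    member_of = {g.name for g in groups if account in g.members}
    return [k.purpose for k in keys if k.granted_to in member_of | {account.name}]

alice = Account("alice", SecurityIdentifier("S-1-5-21-0001"))
finance = Group("finance_readers", members=[alice])
keys = [Key("read financial reports", granted_to="finance_readers")]
print(effective_access(alice, [finance], keys))   # ['read financial reports']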

Safety on materialised information (at rest)
What looks simple is in reality terribly complicated. All information in one of the D, T, A, P environments should be in the same security context segment for operational usage. When an analytical plane is in place, controlled, managed gateways should be in place. The rules for business information are not the same as those for business rules (code logic).
business data - connections
Another devils triangle security

⚠ These are real business information assets to manage.

See figure right side


Operational execution, operators
The operators (production), acceptance testing, and system & program testing have another type of needed access. Privileged accounts are to be implemented for safety on logic, code software, and information.
Operations business data, logic - connections
Another devils triangle security

⚠ These are real business information assets to manage.

See figure left side


R-2.3.2 Safety perspectives combined to release management
istqb
ISTQB - Test quality, an ICT specialist
The capabilities for software testing are standardised with a vision. Safety is an indispensable part of testing. Certifications and an international organisation.
ISTQB® was established in 1998 and its Certified Tester scheme has grown to be the leading software testing certification scheme worldwide.
The ISTQB Agile Test Leadership at Scale (syllabus) connects to lean. Source: LEI (Lean Enterprise Institute).
❗ There are two types of values streams!
A value stream is a concept that originates in lean management. Value streams are groups or collections of working steps, including the people and systems that they operate, as well as the information and the materials used in the working steps. In value-driven organizations, quality and testing roles help to optimize the whole value stream, not just testing.
There are two typical types of value streams: operational and development.
👁 Operational value streams are all the steps and people required to bring a product from order to delivery (LEI, no date).
👁 Development value streams take a product from concept to market launch (LEI, no date).
Key aspects of value streams are to understand the lean concepts of flow and of waste (non-value-adding activities).

Safety practice - Testing validating with quality
There are guidelines for separation in activity lines (ISO/IEC 27002):
12.1.4 Separation of development, testing and operational environments: Development, testing, and operational environments should be separated to reduce the risks of unauthorized access or changes to the operational environment.
three layers building on the previous production version.
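A small illustrative check, assuming a D/T/A/P promotion order, of that separation rule: a release only moves one step forward and direct jumps are refused. The environment names and the rule itself are assumptions based on the guideline above.

PROMOTION_ORDER = ["development", "test", "acceptance", "production"]

def next_environment(current: str) -> str:
    # A release may only move one step forward, never skip or move backwards.
    index = PROMOTION_ORDER.index(current)
    if index == len(PROMOTION_ORDER) - 1:
        raise ValueError("already in production")
    return PROMOTION_ORDER[index + 1]

def check_separation(source: str, target: str) -> None:
    # Environments stay separated: only the next environment is an allowed target.
    if target != next_environment(source):
        raise PermissionError(f"promotion {source} -> {target} violates the separation rule")

check_separation("acceptance", "production")        # allowed
# check_separation("development", "production")     # would raise PermissionError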

R-2.3.3 Safety perspectives privileged accounts
feel unave cia
PIM privileged identity, PAM privileged access
Isolating privileged access using NPAs (non-personal accounts).
Avoiding: PAM PoP, principles of operation (Microsoft). Privileged Access Management keeps administrative access separate from day-to-day user accounts using a separate forest.
The PAM approach provided by MIM PAM is not recommended for new deployments in Internet-connected environments. MIM PAM is intended to be used in a custom architecture for isolated AD environments where Internet access is not available, where this configuration is required by regulation, or in high impact isolated environments like offline research laboratories and disconnected operational technology or supervisory control and data acquisition environments.
👁 Entra Privileged Identity Management (PIM) is a service in Microsoft Entra ID that enables you to manage, control, and monitor access to important resources in your organization. These resources include resources in Microsoft Entra ID, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune.
Organizations want to minimize the number of people who have access to secure information or resources, because that reduces the chance of a malicious actor getting access, or of an authorized user inadvertently impacting a sensitive resource. 👁 However, users still need to carry out privileged operations in Microsoft Entra ID, Azure, Microsoft 365, or SaaS apps. Organizations can give users just-in-time privileged access to Azure and Microsoft Entra resources and can oversee what those users are doing with their privileged access.
Not good: Using Privileged Identity Management requires licenses.
Privileged Identity Management provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions on resources that you care about. Here are some of the key features of Privileged Identity Management:
  • Provide just-in-time privileged access to Microsoft Entra ID and Azure resources
  • Assign time-bound access to resources using start and end dates
  • Require approval to activate privileged roles
  • Enforce multifactor authentication to activate any role
  • Use justification to understand why users activate
  • Get notifications when privileged roles are activated
  • Conduct access reviews to ensure users still need roles
  • Download audit history for internal or external audit
  • Prevents removal of the last active Global Administrator and Privileged Role Administrator role assignments

• 👉🏾 Abstractions of the mentioned technologies make the goal technology agnostic.
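In that technology agnostic spirit, a minimal sketch (not the Entra PIM API) of the just-in-time idea behind the feature list above: time-bound, approval-based activation of a privileged role with a justification kept for audit. All names and the default window are assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RoleActivation:
    user: str
    role: str
    justification: str
    approved: bool = False
    expires_at: datetime = None

def activate(request: RoleActivation, approver: str, hours: int = 2) -> RoleActivation:
    # Approval plus a start/end window; the justification is kept for the audit trail.
    if not request.justification:
        raise ValueError("justification required to activate a privileged role")
    request.approved = True
    request.expires_at = datetime.now(timezone.utc) + timedelta(hours=hours)
    print(f"[audit] {approver} approved {request.role} for {request.user}: {request.justification}")
    return request

def is_active(request: RoleActivation) -> bool:
    # Access only counts while approved and inside the time window.
    return request.approved and request.expires_at is not None \
        and datetime.now(timezone.utc) < request.expires_at

request = RoleActivation("j.doe", "Global Administrator", "emergency change #4711")
request = activate(request, approver="security-officer", hours=1)
print(is_active(request))   # True, until the window expires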

    🚧 Retrospectives and corrective actions are needed.
    👐 Goals, criteria: solving the real bottlenecks hampering the organisation.

    feel enterprise new areas

    R-2.4 Solving: Historical Information, risk & impacts

Compliance questions are applicable everywhere, internal and external to an organisation. Although this is the technical pillar, roles corresponding to the ones in the organisational pillar are needed.
Support for the organisational:
Similarity in using the SIAR model, holistically and at the technical pillar, is intended.
     legal
    R-2.4.1 Data / Information Governance: Safety
    Generic process flow model
Every process has some input and output / results to deliver. For processing, the components at rest and the information in transit go through several steps. There is an important difference at D (Develop), being the only line developing new logic. The Test, Acceptance and Production (operations) lines are very similar.
    In a figure:
    releasemanagement tools technique
A shadow Production or Acceptance environment can act as fall-back for the Production line when it does not share critical components. Hardware as a physical component is very often shared by having it in a datacentre.
    ✅ The classic DR, Disaster Recovery, focus on recovery of losing physicals.

Aside from the physical access, losing the logical information is a risk. It can happen by accident, unintended, but also intentionally by a bad actor. Ransomware is an incarnation of intended logical destruction.
    ✅ The classic BR, Backup Recovery, focus on recovery restoring logical content.

    Availability mitigations
Having those recovery strategies for physical and logical components: they are costly and you never really want to use them.
😱 I will never understand why the responsible, accountable ones are doing cost savings by ignoring recovery strategies.
Buying expensive physical components for physical hot standby and then claiming that logical content recovery is not necessary anymore is not understanding risks.
DR exercises for physical recovery that are only successful after several attempts are a fail.
⚖ 😉 By understanding the business impact, acceptable choices are possible. Why would you need development for a limited time when doing a technical migration?


    Information flow, using closed loops
To monitor what is going on, closed loops on the operational flows should be in place. When the product data is highly vital, an additional flow evaluating new options before changing process flows is needed. In a figure: Process informationflow

    R-2.4.2 Data / Information Governance: Flow
    Confidentiality, Integrity, Availability
    A figure,
    See right side:


    R-2.4.3 Data / Information Governance: Explainablity
    solving missing: backstop, parallel alternative selection

    Initiatives Result & More.

SMART, a buzzword getting another name by reordering and using other words.
     
    SIAR not STAR, PDCA
    When seeing and recognizing an issue when involved and committed to the process the following questions arise:
    1. Why?
    2. Possible improvements?
    3. Who can help?
    4. What can I do?
    5. When to do it?
    siar not STAR PDCA
    Situation going for Initiatives that are by Actions getting into Results: SIAR

    "T" (Task) replaced by I. Tasks are dictated. Initiatives are using the experiences from what is going on.

The PDCA cycle is the same, shifted: IARS. Do is the Actions, and Act (decide what next) is analysing the Situation.

The external input, external service provision (left), and the external output, external delivery support (right), are not in a logical time order. To break this illogical order, somewhere in the continuous cycle a start must be made.

    Processes - Continuity, Availability - DR BCM.

Business continuity is not only about having a well secured physical and logical environment, but also about what to do when things go terribly wrong. Losing physical or logical access can stop all processes of an organisation.


    order in logic

    R-2.5 Solving: Communication Cooperation

    The simple question: "Whose Job Is It, Anyway?"
There was an important job to be done and Everybody was sure that Somebody would do it. Anybody could have done it, but Nobody did it. Somebody got angry about that, because it was Everybody's job. Everybody thought Anybody could do it, but Nobody realized that Everybody wouldn't do it.

It ended up that Everybody blamed Somebody when Nobody did what Anybody could have.
    R-2.5.1 Naming classification conventions libraries
    R-2.5.3 Thesaurus using domain knowledge LLM-s
    the problem: missing thesaurus
    What struck me was that in the discussions terms often came back that could mean the same thing. Sometimes they talked about interventions, sometimes about subsidies and sometimes about regulations or openings.
The coordination took the project a lot of time. This raised the idea to recognise and define these concepts and to set up a conceptual information model for the agricultural domain. The starting point? The European Common Agricultural Policy (CAP), the associated EU regulations and the Dutch reflection thereof: the National Strategic Plan (NSP), supplemented with NL legislation. Many hundreds of pages of 'pure reading pleasure'!
    But in those many hundreds of pages was also the challenge: how could we get all those years of knowledge and expertise from the documents and heads of the experts onto 'paper', without sedating them, kidnapping them and interrogating them 24/7 in a shed somewhere?

The solution: one of the miracles of our time, artificial intelligence.
    What was the value of a VE 'virtual expert' for the entire modeling process? The VE played an important role in the modeling process and the process consisted of the following steps:
    1. Drawing up a model of concepts for the domain
    2. Organizing concepts and recognizing data areas
    3. Defining the concepts, using the right sources
    4. Drawing up concrete examples of these concepts
    5. Drawing up the information model based on the example sentences
    6. Validating the information model using example sentences.
How to set up the 'virtual expert'? It's actually very simple (and not rocket science at all)! Below is a step-by-step plan with simple 'prompts' with which you can build your own 'virtual expert'. By the way, OpenAI's ChatGPT was used, but perhaps better/other versions are available.
    1. Create an account at openai.com
    2. Go to the option "my GPTs"
      Please note: you must have a paid account to build a GPT yourself.
    3. Select the option: "create a GPT"
    4. Select the option: "configure"
    5. Give the GPT a name such as "Subsidy Expert"
    6. Give the GPT a description: "This expert knows all the types of subsidies that exist in the Netherlands."
    7. Give the GPT a series of instructions in the instruction field (experiment!):
      You are an expert in the field of . You communicate briefly and concisely and use informal language and preferably no bullet lists (unless I ask for it). You always search first in trained documentation and only then in other knowledge sources. You set up non-circular definitions and do not use terms that are synonymous with the concept to be described. etc...
    8. Then upload the desired files - what is the context that the GPT needs to know?
      Note: only provide open data to the GPT with this!
    9. Tweak the model, adjust it as you wish... good luck!
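The steps above use the ChatGPT web interface ("my GPTs"). As a rough equivalent via the API - an assumption, not part of the original write-up - the same instructions can be sent as a system prompt with the OpenAI Python library; the model name and the inline context are illustrative only.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

INSTRUCTIONS = (
    "You are an expert in the field of agricultural subsidies. "
    "You communicate briefly and concisely, use informal language and avoid bullet lists unless asked. "
    "You search first in the provided documentation, only then in other knowledge. "
    "You write non-circular definitions and avoid synonyms of the concept being defined."
)

def ask_virtual_expert(question: str, context: str = "") -> str:
    # One question to the 'virtual expert'; 'context' stands in for the uploaded files.
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_virtual_expert("What is an 'intervention' under the CAP?"))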

    Connecting a thesaurus to naming standards
A thesaurus is a logical, theoretical construct; naming standards and usage standards are realisations for coding and reporting. Data, information, has six stages in a cycle:
    1. Plan: what kind of information is needed
    2. Design: preparation with scale of measurements
    3. Realisation: create a thesaurus, a master data management system
    4. Manage: store, archive or destroy the content for the thesaurus information
    5. Usage: simple usage of specified elements in the thesaurus
    6. Insight: Getting wisdom using interactions of multiple elements in the thesaurus
In practice the challenge is to build a thesaurus from the specialists doing their work.
    💡👁❗ Preparing for a data literacy structure: "Data driven work".
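A minimal sketch, with a hypothetical structure, of connecting a thesaurus to naming standards: one preferred term per concept, synonyms resolving back to it, and a coding realisation derived from the preferred term.

from dataclasses import dataclass, field

@dataclass
class Concept:
    preferred_term: str
    definition: str
    synonyms: list = field(default_factory=list)

    def code_name(self, prefix: str = "agri") -> str:
        # Naming-standard realisation of the logical concept (e.g. for columns, reports).
        return f"{prefix}_{self.preferred_term.lower().replace(' ', '_')}"

class Thesaurus:
    def __init__(self) -> None:
        self._by_term = {}

    def add(self, concept: Concept) -> None:
        for term in [concept.preferred_term, *concept.synonyms]:
            self._by_term[term.lower()] = concept

    def lookup(self, term: str):
        # Resolve any synonym ('intervention', 'subsidy', ...) to one concept.
        return self._by_term.get(term.lower())

thesaurus = Thesaurus()
thesaurus.add(Concept("Subsidy", "Financial support granted under a regulation.",
                      synonyms=["intervention", "regulation opening"]))
print(thesaurus.lookup("Intervention").code_name())   # agri_subsidy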
    The evolution of knowledge to insight
My first site was a description of what has happened.
The second tried to improve on that, getting prescriptive on how it should work.
This third one is going predictive, from what has happened to what could happen.
    Old useful information should get a place in the new structure. It will give some waste that is not really waste. The driver behind the last change is the idea of a framework and related tools "Jabes". The problem with that idea is that there is no good fit in the existing situation of ICT.
    A needed correlated change:
    Confused-2

    R-2.6 Maturity 4: ICT service solutions for gaps

From the three ICT, ITC interrelated scopes: even when only focusing on IT4IT, getting mature Life Cycle Management (LCM) requires understanding and acknowledgment of the layered structure.
Each layer has its own dedicated characteristics.

    R-2.6.1 Leaving the comfort zones
    Servicing technology - release change, safety
Just understanding some better technology concepts does not really help in changing a culture that is blocking improvements.
😲 For release management there have been legal obligations for a long time. I have never seen those implemented. The biggest impediment is a mindset switch. 😲 For safety (cyber security) there have been legal obligations for a long time as well. I have never seen those implemented. The biggest impediment is a mindset switch: it is an organisational accountability and responsibility, not something to be outsourced as just technology.
    It starts at the organisation with risk management.
    Servicing technology - information flow, master data
Just understanding some better information flow does not really help in changing a culture that is blocking improvements.
😲 For information flows there should be a valuable theory. I have seen things implemented; however, all got removed. The reasons: the cost argument and conflict avoidance.
😲 Master data, data dictionaries, define the context for information and information flows. I have seen it implemented once, and it got dismantled with the cost-saving argument. The biggest impediment is a mindset switch: it is an organisational accountability and responsibility; this is not technology but of high value for the organisation.
It starts at the organisation with a shared glossary, a shared vocabulary, understanding each other.
    R-2.6.2 Detailed information for service solutions
    Servicing technology
    The first idea with Jabes was: it would be a technology solution.
    After analysing, see "C-Server", that proofed to be wrong. However technology has impediments for a structural change using lean. There are solutions to solve those impediments but wanting to do those is having cultural impediments.
    In the practical technology area I have many old pages, some are theoretical. They should be subpages in one of the two technology areas (C-Serve r_serve).

    detailed descriptive information at ..
    technology theoretical - Life cycle
    List of detailed old pages (sdlc): List of detailed old pages (meta relocated to sdlc):
    detailed descriptive information at ..
    technology theoretical - information flow
    List of detailed old pages (design_data to convert): List of detailed old pages (design_bianl):
    detailed descriptive information at ..
    technology structure practical
    To relocate, old page, not at "C-Serve" "r_serve" location: To relocate, old page, or not at "C-Serve" "r_serve" location:
    R-2.6.3 The silver bullet
lightning striking a tree ..
    Servicing technology, fear of missing out (FOMO)
    Wondering why nothing has changed for many years, repetition of similar developments and events?
    The technology buzz:
    lightning strikes (Bill Inmon sep 2024) Recently in the morning I was talking with a venture capitalist. We were discussing technology, the marketplace, trends, and what is current – AI, ChatGPT, generative AI, and current trends. I was talking to the venture capitalist about technology that produced business value for the corporation. I was talking about a corporation making more money, becoming more profitable, and having more customers. I had always assumed that that is what corporations wanted to do.
    The venture capitalist interrupted me and said – that isn’t what people are interested in. People today are interested in buying into the technology of AI for the sake of having AI. People are really into technology that is cool. Organizations are heavily into FOMO – fear of missing out. Corporations don't want to be considered to be behind the times so they have to bring in AI, in one form or the other. Making money and making more revenue is just not what sells today.
    😱 Did I hear that right? Corporations are buying into the cool factor without considering the business value? Is that really true?
    Then that same afternoon I was talking to a consulting company. And lo and behold I had the same conversation. The head of the consulting firm told me that technology was selling for the sake of technology, not for the enhancement of business value. He said that corporations just weren’t interested in enhancing business value. What corporations were interested in was appearing to be a modern corporation to the outside world. Corporations just weren’t interested in more profitability, more revenue and more customers.
    😱 To be honest, I could not believe what I was hearing.
    ➡ I assume everybody with a long experience at organisations is recognizing this. It is not very often that a clear statement like this is made.
    Hasn't the world learned anything from the many silver bullets that IT has brought to the business community over the years? Hasn't the world learned to – first and foremost – ask for business value? How many silver bullets have been presented as "the solution" only to disappear in a year or two. If a technology does not bring business value then it has no staying power and will be swept away with the next tide.
    The definition of insanity is to do the same thing repeatedly and to expect a different outcome. The corporate community is having decisions being made by either very naïve people or insane people. (Does it matter which?) Corporations keep buying into the silver bullet expecting their technology and IT problems to disappear.

➡ Recognizing what is going on is the first step in awareness. I am hoping that at some moment there is a turning point, leaving this weird cycle of just following others.
    🤔 If technology – any technology – does not fulfill business value then that technology is not long for the earth. And corporate management – once again – will have wasted a lot of money and opportunity. Once again corporate management will have spent huge resources on the silver bullet.
    🤔 In the Gartner hype curve the vendor produces a tremendous amount of hype in order to establish a product. But when the corporation enters the trough of disillusionment, it is only genuine business value that pulls the technology back into a state of positive acceptance. If there is no business value there, the technology withers and dies.
    In a word – if technology is being bought and sold on the basis of cool, then that technology is in danger of being just another failed silver bullet. In order to have a long term, sustained presence and value in the marketplace, the technology MUST produce viable, measurable, prodigious business value. That is the most important and the immutable role of new technology if the technology is to survive.

    feel lost ...
    Servicing technology, Improvements
Wondering why nothing has changed for many years, repetition of similar developments and events? The culture disillusion: why today's leaders don't value TPS, lean (Kevin Kohls, Sept 2024)
Convincing leadership to adopt the cultural shift that comes with internal Continuous Improvement: this is where the howls of contention start. The list is long and filled with denial and finger-pointing.
    🤔 In reality, people hired in 2000 are often in leadership positions 24 years later. They have had minimal success with many of these methods since starting their jobs. They have seen the 2008 recession, COVID-19, chip shortages, and profitability rise because demand is high. They have seen the rise of robotic automation on the plant floor and in the office and the sudden rise of artificial intelligence. These outside influences go beyond internal productivity. They have a greater chance of support and possible impact than lean, TOC, etc.
    🤔 To this leadership group, outside influences not under their control have been the focus of their attention, not internal Continuous Improvement. We can complain about the lack of leadership support, but we must accept that leadership will spend budget and headcount.
    ➡ We all see the struggles at organisations not knowing well how to change to achieve real improvement. The initiatives for a profitable lean approach requires seeing the C-roles as customers. It is not very often that a clear statement like this is made.
    🤔 The other realization is that we have little empathy for leadership challenges. Although we have never been leaders, we quickly point out that they are the root cause of the problem. We have no responsibility to change that.
    The reality is quite different. First, we forget that leadership is OUR customer. Like any customer who buys a product on Amazon, customers will buy things that adequately address their product or service, often based on the product's Five Star Rating.
    We in Continuous Improvement don't do that. We try to convince them to buy a product with a poor reputation, do it regardless of the costs, and tell them what they should see as valuable as customers.
    ➡ We all see the struggles at organisations not knowing well how to change to achieve real improvement. The initiatives for a profitable lean approach requires seeing the C-roles as customers. It is not very often that a clear statement like this is made.
    🤔 In addition, we have little empathy for their position. They got the promotion and should be able to figure it out. They are not a peer anymore. They are getting paid big bucks. Let them figure it out by themselves. We have explained the brilliance of our solution, and we can’t help it if they don’t see its rationale.
    We fail to realize that commitment, which includes a commitment to a limited budget, will have to address their problems, not ours. We must recognize that their decision to adopt a CI mindset will be more emotional than logical. We may think they have access to vast amounts of money, but they only have a small budget, which they must see an ROI on if they want to continue to support this effort.
Finally, we think leaders should all be like someone else or run like another company. They should have products like the iPhone and the mature kaizen mindset of Toyota, be driven to success regardless of the barriers, yet be sensitive to our daily needs - the wisdom of Gandhi, the passion of Steve Jobs, the fierceness of Patton, etc. ...
    ➡ We all see the struggles at organisations not knowing well how to change to achieve real improvement. The initiatives for a profitable lean approach requires seeing the C-roles as customers. It is not very often that a clear statement like this is made.
    The lack of empathy to solve leadership problems is the biggest failure on the part of CI. This is a fundamental mindset change in Lean.
    R-2.6.4 Getting ITC in control
    clock spirals ..
    Servicing technology - release change, safety
Just understanding some better technology concepts does not really help in changing a culture that is blocking improvements.
😉 When release management gets improved, do not expect it will ever be finished. Continuous improvement is what matters.
😉 When safety (cyber security) gets improved by design, do not expect it will ever be finished. Continuous improvement is what matters.

    Servicing technology - information flow, master data
😉 When better information flows are getting understood & implemented, do not expect those will ever be finished. Continuous improvement is what matters.
😉 When master data, data dictionaries for information context and information flows, gets into place, do not expect it will ever be finished. Continuous improvement is what matters.

    🎯 ✅-GAP CI-SDLC CI-SAFE CI_Inform CI_Mdata CMM5-4IT 🎯
      
    🚧  👁-GAP F-SDLC F_SAFE F_Inform F_Mdata CMM3-4IT 🚧
      
    🔰 Contents E_SDLC  C_SAFE  E_Inform C_Mdata CMM0-4IT 🔰


    R-3 ICT service adding value to missions of organisations


    advice request Pythia

    R-3.1 Avoiding ICT Service Gap types

Understanding what is going on, with all uncertainties and possible future scenarios, is an everlasting quest. A pity when there are a lot of misunderstandings caused by not having a shared ontology, a shared vocabulary. A system that supports the ICT ITC Service and change Shape transformations is a gap.
Building up complexity in mindset:

    R-3.1.1 Knowing understanding by ontology
    Business demo J.Dietz
    Component: Enterprise Ontology 101
Enterprise Engineering, Enterprise Ontology, is a good starting point for reviewing what data is about. Enterprise Engineering, the manifesto: There are two distinct perspectives on enterprises (as on all systems): function and construction. ... The key reason for strategic failures is the lack of coherence and consistency among the various components of an enterprise. ...
    It is the mission of the discipline of Enterprise Engineering to develop new, appropriate theories, models, methods and other artifacts for the analysis, design, implementation, and governance of enterprises by combining (relevant parts of) management and organization science, information systems science, and computer science.
    Ontology is the philosophical study of being. Abstract objects are closely related to fictional and intentional objects.
    The ontological model of a system is comprehensive and concise, and extremely stable.
    It is the duty of enterprise engineers to provide the means to the people in an enterprise to internalize its ontological model.
Separation of intention and content will create a new field - enterprise engineering - and make it intellectually manageable. Administrative systems have been processing data - information - for a long period using computers. Information is about abstract objects. Options for measuring processes were limited as computer resources were limited and expensive. Options to manage abstract objects were limited. That is all changing.

    istqb
    Example differences visions vs missions: ISTQB
A nice example of the difference between vision and mission, and of what to do for missions.
    (International Software Testing Qualifications Board , who we are):
    ISTQB has the vision:
    Defining and maintaining a Body of Knowledge which allows testers to be certified based on best practices, connecting the international software testing community, and encouraging research.

    There is long list with missions:
    R-3.1.2 Start with what drives success
    focus on what really drives success
    Let's put the agile wars to rest and focus on what really drives success. (Wolfram Müller sep 2024) Agile Methods are always the best and are defended by their people! We've all heard the heated debates: SCRUM vs KANBAN, SAFe vs Less and XP was always better ... ... endless arguments about which method is the ultimate way to run an organization. But let's be honest – if any one method was the silver bullet, wouldn't the debate be over by now? The fact that the discussion still rages shows one thing: none of them are perfect. There must be something more.
Question: Why does project management get all the attention and not the core products, the ones delivering the value? In my life I went through both extremes, just projects & just product ... in the end it's all about scaling.
    In bigger organizations you need a production value stream ( ➡ run) and a project value stream ( ➡ change), but both cases need FLOW.
feel lost Question, Remark: I am missing the closed-loop feedback informing what is going on and whether improvements are achieved, in an objective, measurable, understandable context for the defined, agreed flows.
    The reply:
    Yes that is often missing, maybe you read it in my comment, you have to have parts of the organization taking care of the product.
What we often see is that even in product oriented organizations these parts are disconnected from the customer and the customer value stream. They often have no idea about the constraint of the customer and, even more so, the constraint of the customer's customer. So I'm fully with you, but you can foster the feedback loops also in project environments.
    At "1and1.com" (now ionos) where i was responsible for the PMO we had special project streams for products they delivered new products (not just variations) within weeks (2-6). That is just possible if you have a full flow project organizations e.g. the roll out of the DSL 16MBit (ok it's some time ago). The project idea came up at the 28-11 and the first customer was online 07-01, no chance with a production organization. That is just possible with projects and maybe you remember we had daily feedback to swallow from the customers :-).
    The balance between change and run.
➡ If you look at an organization from the outside you don't see the inner structure, you typically just see one value stream. The focus should be there: what product delivers real value (e.g. solves a constraint of the customer).
    ➡ There is typically a second stream, updates of the product or new products, this is more hidden.
    If you try to optimize the team, then you get mega silos and the constraint in the value stream gets overloaded and the overall performance drops. The focus should be on the constraint of the value stream (build & round) and often (marketing/sales).

    R-3.1.3 Looking for a chart representing the enterprise area
    charting a virtual floorplan
Cartography, the art and science of graphically representing a geographical area, usually on a flat surface such as a map or chart. It may involve the superimposition of political, cultural, or other non-geographical divisions onto the representation of a geographical area.
An abstraction of the shop floor for the virtual administrative world is not very common. In the 6*6 representation of areas with ordered axes there is a start for a map. From many dimensions, a projection into one of 2 dimensions. Limiting that to what is around somebody's position means seeing only 9 areas.
    Lean at the shop floor genba2 genba3
For the Way of Working at the shop floor there are two perspectives: WOW shop-floor
    In a figure:
    See right side.
    R-3.1.4 Jabes Vision: Change ICT into a service culture ITC
    Jabes for ITC ICT a vision for a cultural mindset.
The value stream has mandatory requirements that have to be fulfilled and shown for the products in the portfolio. It is a challenge how to create instructions to be followed when managing the product process flow.
During design and validation of the product the solutions for those challenges should get materialised. As long as the product is relevant, that is, customers are using it or are able to refer to it, the information of the portfolio product for the dedicated version should at least be retrievable.
To get covered by information, functional knowledge, by a portfolio: Going for this approach to deliver to customers, the service organisation should be ready with that mindset. Jabes process Assurance
    In a figure:
    See right side.
    Practice what you preach
    To get covered in knowledge by a portfolio: Jabes generic process
    In a figure:
    See right side.

    Portfolio Process: ideate, initiate & technology validation in an infographic:
Goal: processes known & in control in all their aspects.
Craftsmanship, deliverables & receipts:
    ◎ ⇄ reviewing: verified: processes maturity levels (independent)
    ◎ ⇄ reviewing: quality of processes results delivered & expectations
    ◎ ⇄ reviewing: quantity of process load delivered & expectations
    ⇄ ⇆ Change and innovation alignment, following and/or initiating
    ⟳⇅ Evaluating Proposals innovation & changes for processes & technology
    ⟳⇅ Evaluating Proposals expected capacity load for processes & technology

    Jabes portfolio Change
    ICT becoming a customer of ITC
    The jabes framework, related tooling, knowledge and way of working should get promoted.
The most logical first step: there must be someone with an interest, but who?
💰 The one accountable for a product, responsible for managing what is in the portfolio.
A big organisation is able to have all knowledge and skills for an information technology service in house. A small company is dependent on the service of a provider. Improvement idea:
💡👁❗ Standardise the Information Service so that the quality for product support and safety is well known and controlled. All, whether big or small, not suffering from unpredictable uncertainties.

📚 What is going on is that the software development cycle is entangled with quality testing in a different time window, and both are entangled with safety. Only the Product accountability (CPO role) and Safety accountability (CSO role) will always be within an organisation.

    Improvement ideas:
    1. A clear organisational role for the Chief Product Officer.
    2. A clear organisational role for the Chief Safety Officer.
    3. A new organisational structure for: "Closed Loops".
    4. A data literacy structure: "Data driven decision making".
    5. An approach for going holistic lean agile at all genba levels.
    6. A new organisational structure for: Information Service.
    At the next paragraphs each of them is a result of reasoning.
    advice request Penelope

R-3.2 Continuous improvement of Systems

Managing information systems in a continuously changing world requires continuously adapting and being prepared to abandon obsolete solutions, creating new solutions in time. Change is the only certainty. A pity when there are a lot of misunderstandings caused by not having a shared vision, mission.
Building up complexity in mindset:

    R-3.2.1 Knowing, understanding by measuring
    The long running technical battles
    💡 Let's put the technology wars to rest and focus on what really drives success.
    We've all heard the heated debates: Windows vs Linux, Oracle vs another DBMS and (name: a programing language) was always better. Endless arguments about which method, technology, is the ultimate way to run for an organization. But let's be honest, if any one method, technology, was the silver bullet, wouldn't the debate be over by now? The fact that the discussion still rages shows one thing: none of them are perfect. There must be something more.
    istqb
    Process, algorithms quality
Only developing a system tells nothing about the quality of what has been built. Testing, verifying, is a capability that answers the question of what level of quality has been built. This is measuring what can be measured and comparing it to what is needed by requirements and/or standards.
ISTQB is specialised in test qualifications. From Test Analyst, an ICT specialist:
    Test conditions are typically identified by analysis of the test basis in conjunction with the test objectives (as defined in test planning). Testing in the Software Development Lifecycle:
    ISTQB Advanced Level Test Analyst (syllabus): The overall SDLC should be considered when defining a test strategy. The moment of involvement for the Test Analyst is different for the various SDLCs; the amount of involvement, time required, information available and expectations can be quite varied as well.
    The Test Analyst must be aware of the types of information to supply to other related organizational roles such as:
Planning for testing is a challenge because test execution is only possible after something is delivered. Documentation items are deliverables, code is a deliverable and a working system in an environment is a deliverable.
    Test activities must be aligned with the chosen SDLC whose nature may be sequential, iterative, incremental, or a hybrid of these.
    For example, in the sequential V-model, the test process applied to the system test level could align as follows: Iterative and incremental models may not follow the same order of activities and may exclude some activities. ...
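Not from the syllabus itself, but as a tiny illustration of such an alignment: the commonly used pairing of specification phases to test levels in a sequential V-model, named here as an assumption.

V_MODEL_ALIGNMENT = {
    "requirements specification": "acceptance testing",
    "functional specification": "system testing",
    "architecture / design": "integration testing",
    "detailed design / code": "component (unit) testing",
}

def test_level_for(phase: str) -> str:
    # Which test level verifies the artefacts of a given specification phase.
    return V_MODEL_ALIGNMENT[phase]

for phase, level in V_MODEL_ALIGNMENT.items():
    print(f"{phase:28s} -> {level}")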


    Measurements, fundamental theory
    Doing testing requires a measurement applicable for what to test.
    "In physical science, the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it.
    🤔 I often say that when you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.
🤔 It may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be."
(Lord Kelvin, Lecture to the Institution of Civil Engineers, 3 May 1883)
    R-3.2.2 Start with understanding the requirements
Requirements Elicitation (G. Alleman, 2024)
    Requirements the fundament for design, the why.
    Needed change in the IT world
    Here's some top-level guidance. But first, a fundamental change needs to take place in the IT world regarding how to capture requirements. It's called Capability-Based Planning. Identifying System Capabilities is the starting point for any successful program. Systems Capabilities are not direct requirements but statements of what the system should provide regarding "abilities."
    Requirements the fundament for design, functionality, capabilities.
    Capabilities Based Planning
    The critical reason for starting with capabilities is to establish a home for all the requirements. To answer the question, "Why is this requirement present?" "Why is this requirement needed?" "What business or mission value does fulfilling this requirement provide?"
    Capabilities statements can then be used to define the units of measure for program progress. Measuring progress with physical percent complete at each level is mandatory for the technical assessment of the project's progress.
    However, measuring how the Capabilities are fulfilled is most meaningful to the customer. The "meaningful to the customer" unit of measure is critical to the success of any program. These measures are necessary for the program to be cost-effective, scheduled, technically successful, and fulfill its mission.
    👁 Starting with the Capabilities prevents the "Bottom-up" requirements gathering process from producing a "list" of all needed requirements that are missing from a well-formed topology.
    This Requirements Architecture differs from the system's Technical or Programmatic architecture.
    Capabilities-based Planning (CBP) focuses on "outputs" rather than "inputs." These "outputs" are the mission capabilities that are fulfilled by the program. Requirements need to be met to fulfill these capabilities. But we need the capabilities first. Without the capabilities, it is never clear whether the mission will succeed because there is no clear and concise description of what success means.
    The CBP concept recognizes the interdependence of systems, strategy, organization, and support in delivering capability and the need to examine options and trade-offs regarding performance, cost, and risk to identify optimum development investments. CBP relies on scenarios to provide context for measuring the level of capability.

    Requirements the fundament for design, performance.
    Requirements Elicitation
    Requirements are the defined attributes for an item before the efforts to develop a design for that item. System requirements analysis is a structured or organized methodology for identifying an appropriate set of resources to satisfy a system need (the needed capabilities) and the requirements for those resources that provide a sound basis for designing or selecting those resources. It acts as the transformation between the customer's system needs and the design concept implemented by the organization's engineering resources. The requirements process decomposes a statement of the customer's need by systematically exposing what the system must do to satisfy that need. This need is the ultimate system requirement from which all other requirements and designs flow.
    There are two fundamental classes of requirements: 👁 There are functional and non-functional requirements as well as product and process requirements. These non-functional requirements play a significant role in the development of the system. Non-functional requirements are spread across the entire system or within individual services and cannot be allocated to one specific product artifact (e.g., class, package, component).
    This makes them more challenging to handle than functional requirements. The specifics of the system's architecture, such as highly distributed services, also raise difficulties.
    The Work Breakdown Structure (WBS) foundation distinguishes between process and product requirements. The related Integrated Master Plan (IMP) and Integrated Master Schedule (IMS) also focus on this separation. The success of the project or program depends on defining both the product and the processes that support or implement the product.
    When properly connected, the Requirements Taxonomy, the Work Breakdown Structure, the IMP, and the IMS "foot and tie" the Performance Measurement Baseline (PMB). This provides traceability of the increasing maturity of the deliverables (vertical), and the physical provides the percent completion of the work efforts (horizontal).

    Requirements the fundament for design, build & test.
    Step-by-Step for Capabilities
    1. Determine what the system is supposed to do in terms of Scenarios or Use Cases. This is a familiar approach. Alistair Cockburn introduced the notion of Use Cases long ago. But they went astray because doing good Use Cases requires understanding what capabilities the customer needs. What is the business problem or mission to be accomplished? How would you recognize that this problem was solved or the mission was accomplished? Measures of Effectiveness are the units needed to confirm this accomplishment.
    2. Assemble these Capabilities into a functional architecture, showing how each capability supports the mission or business need.
    3. Develop a maturity flow for each capability, showing how the presence of this capability allows the business or the mission to do some work. This, of course, is simple agile-working software. However, agile now sees that bottom-up response to customer needs requires a programmatic architecture framework to ensure that the end is reached as planned. This is Stephen Covey's Habit 2, Begin with the End in Mind, and is the Integrated Master Plan / Integrated Master Schedule paradigm of DOD 5000.02 procurement. Imagine that 5000.02 and Agile are on the same page. See Agile+EVM=Success for guidance here.
    As each capability appears, the project can start producing valuable services—do something useful. But—and this is critical—the business or the mission MUST be capable of receiving this capability. It does no good to have capabilities that cannot be used. The purpose of this diagram is to show what capabilities are needed and in what order. This is a Top-down process done by the business or mission owners.
    A Simple Step-by-Step for Requirements Elicitation
    1. There need to be process and product requirements for each defined capability.
    2. Each requirement MUST flow from a needed capability. It requires a reason for being there, a parent, and fulfilling some needed capability.
    3. As an aside - all requirements are derived. This means all requirements are derived from the needed capabilities. In some circles, this is not the paradigm, but in the complex, software intensive world of space flight and weapons systems - All Requirements Are Derived from the Mission Statement or the Concept of Operations (ConOps).
    4. These two documents usually need to be included in the IT world, but the resulting gap is that we need to know WHY we're doing something.
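A sketch under assumptions, not Alleman's tooling: every requirement traces to a parent capability, and progress is rolled up per capability with a measure of effectiveness that is meaningful to the customer. All names and values are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    text: str
    kind: str          # "functional" or "non-functional"; product or process
    done: bool = False

@dataclass
class Capability:
    statement: str                  # what the system should provide ("ability")
    measure_of_effectiveness: str   # unit of measure meaningful to the customer
    requirements: list = field(default_factory=list)

    def percent_complete(self) -> float:
        # Physical percent complete, rolled up from the derived requirements.
        if not self.requirements:
            return 0.0
        return 100.0 * sum(r.done for r in self.requirements) / len(self.requirements)

capability = Capability(
    statement="Process a subsidy request from intake to decision",
    measure_of_effectiveness="decisions per day within the legal response time",
    requirements=[
        Requirement("Validate applicant identity", "functional", done=True),
        Requirement("Every decision is logged for audit", "non-functional", done=False),
    ],
)
print(f"{capability.statement}: {capability.percent_complete():.0f}% complete")   # 50% complete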

    R-3.2.3 Looking at, understanding quality management
    istqb
    ISTQB - Agile Body of knowledge
    Testing methodologies using sound underpinned theory and metrics.
    Agile test leadership draws upon methods and techniques from traditional software quality management and combines these with new mindset, culture, behaviors, methods, and techniques from quality assistance. ISTQB Agile Test Leadership at Scale (Body of Knowledge ):
    👁 Traditional test management has a tendency to focus on managing and controlling the work of others. Test management in the agile organization has a broader scope than solely focusing on testing the software. By shifting agile test management to a quality assistance approach, agile test leaders spend more time enabling and empowering others to do the test management themselves. The aim of this support is to contribute to the improvement of the organization’s QA and testing skills with a view to enabling better cross-functional team collaboration.
    👁 Business agility also drives the move away from traditional management roles toward self-empowered delivery teams and enabling leaders (also called servant leaders or leaders who serve). As a consequence, people in roles such as project manager and test manager sometimes struggle to find their place in organizations moving toward business agility. This shift means that traditional roles (ad 1), such as test managers, test coordinators, QA engineers, and testers, need to dedicate more time and effort to foster the necessary quality management related skills and competencies throughout the organization rather than actually doing all the testing.
    Agility, Lean, effective efficacy, as business culture.
    👁 With business agility there is a move toward preventing rather than finding defects, to optimize quality and flow. Automation, “shift left” approaches, continuous testing, and other quality activities are necessary to keep pace with the incremental deliveries of customer-focused organizations. These practices are often described using the concept called “built-in quality”. Additionally, there is also a move to “shift right”. “Shift right” practices and activities focus on observing and monitoring the solutions in the production environment and measuring the effectiveness of that software in achieving the expected business outcomes. These practices are often described using the concept called “observability”.
    👁 Moving to a quality assistance approach provides many opportunities to reinforce the view that quality is a whole-team responsibility across the entire organization. One way is for the organization's management to support collaboration within expert groups, often known as communities of practice (CoP). The expert groups main goal should be to go to places where the work happens and work with delivery teams to spread knowledge and behavior.
    A successful implementation of quality assistance as a quality management approach results in: There are many other positive outcomes of quality assistance, which will be covered in later chapters.
    ad 1: The naming convention of roles differs from organization to organization.


    R-3.2.4 Jabes vision: Servicing the Chief Product Officer
    The Jabes framework and tooling must have a sponsor besides other stakeholders.
    What would be the most logical sponsor?
    💰 The one that is accountable for a product, responsible for managing what is in the portfolio. A financial budget is needed.
    Product quality is intangible in the flow of a product in the complete product lifecycle. The consequence should be: clear accountability and responsibilities for all involved in the flow.
    WOW product manager shop-floor boardroom
    The position of a CPO.
    In a figure:
    See right side.

    Business Analyst supporting product delivery,
    Safety Analyst in the feedback loop: risks & technical performance.

    Improvement idea:
    💡👁❗ A clear organisational role for the Chief Product Officer.
    See "control & command" (I-C6isr) for more on which roles are needed and how they are changing.
    change and threats about a safe place

    R-3.3 Continuous improvement of Safety

    In general, compliance means conforming to a rule, such as a specification, policy, standard or law.
    Governance, risk management, and compliance are three related facets that aim to assure an organization reliably achieves objectives, addresses uncertainty and acts with integrity.
    Building up in mindset complexity:

    R-3.3.1 Safety some principles for realisations
    Generic stages DDD
    The stages in the safety:
    Archiving, useful information
    Generic guidelines
    How to manage technology?
    Functional gap: Ethics conduct code
    🤔 An attitude is important for this role, an ethical mindset. From the ACFE (Association of Certified Fraud Examiners), a code of professional ethics. Replacing the specific ACFE reference with "-":
    The work that - professionals perform can have a tremendous impact on the lives and livelihoods of people and organizations. It is therefore crucial that members - exemplify the highest moral and ethical standards. All - must agree to abide by the Code of Professional Ethics.
    R-3.3.2 Safety several basic attention areas
    office garden - auditing monitoring
    Archiving, auditing, monitoring.
    Archiving information (data) serves multiple goals and is the most forgotten area of requirements to fulfil for long-term stability. Auditing and monitoring are better known because external auditors require information to underpin signing the annual financial reports.
    ❗ The limitation is that it easily gets narrowed down to what those auditors are asking for.
    💣 Missing the real reason behind those questions: only making the sign-offs happen.

    single network, multiple locations service points
    Business Continuity Management (BCM)
    Loss of assets can disable an organisation from functioning. Risk analysis decides to what level continuity, within what time, at what cost, and with what kind of acceptable loss, is required. Multiple mitigation options: ❗ An important issue: which components could get compromised at the same time. Without an isolated, verified fall-back approach, recovery may not be possible.
    💣 BCM has visible costs for implementations but no visible advantages and/or profits.

    Self service - logging monitoring
    Logging, monitoring.
    Logging is tracing the events in a system; goals are: 🚧 ❗ The Security Operations Centre (SOC) is a spin-off with the task of evaluating, monitoring and reacting to events that possibly compromise the integrity of systems and/or breach information.
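    A minimal sketch of what event logging for such monitoring could look like, assuming Python and a JSON-lines event format that SOC tooling could ingest; the logger name, field names and the shipping step are illustrative assumptions, not a prescribed design.

    import json
    import logging
    import sys
    from datetime import datetime, timezone

    class JsonLineFormatter(logging.Formatter):
        """Render each log record as one JSON line, easy to ship to a SIEM or SOC tooling."""
        def format(self, record: logging.LogRecord) -> str:
            event = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "level": record.levelname,
                "system": record.name,
                "message": record.getMessage(),
                # optional context supplied via the `extra=` argument
                "user": getattr(record, "user", None),
                "action": getattr(record, "action", None),
            }
            return json.dumps(event)

    logger = logging.getLogger("business-app")      # hypothetical system name
    handler = logging.StreamHandler(sys.stdout)     # in practice: a file or a log shipper
    handler.setFormatter(JsonLineFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # Trace a security-relevant event; a SOC rule could alert on repeated failures.
    logger.warning("login failed", extra={"user": "j.doe", "action": "login"})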

    R-3.3.3 Safety standards, body of knowledge
    A 360-degree safety view
    From the question: what would a 360-degree approach to cyber security look like for the organization?
    A 360-degree approach to cybersecurity for an organization would involve a comprehensive and holistic approach to protecting the organization's assets, both online and offline, from cyber threats. The approach should address all aspects of cybersecurity, including people, processes, and technology. Some of the key elements of a 360-degree approach to cybersecurity include:
    1. Conducting regular risk assessments.
    2. Providing regular training to employees on how to identify and prevent cyber threats.
    3. Implementing strict access controls to limit access to sensitive data and systems.
    4. Implementing firewalls, intrusion detection systems, and other security measures.
    5. Data encryption & backup solutions, goal: Data protection & business continuity.
    6. Developing and testing incident response plans.
    7. Ensuring that the organization is compliant with regulations and standards.
    8. Monitoring network and systems for suspicious activity. Actions in case of any breach.
    9. Regularly assessing the security protocols of third-party vendors and partners.
    10. Communicating the cyber security plan and policies to the employees and stakeholders.
    Implementing a 360-degree approach to cybersecurity requires a significant investment of time and resources, but it is essential for protecting the organization's assets and ensuring the continuity of business operations.
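    As one minimal illustration of point 3 above (strict access controls), a sketch of a deny-by-default, role-based permission check; the roles, permission names and helper function are hypothetical examples, not a recommended product.

    # Minimal sketch of role-based access control (point 3 above).
    # Roles and permission names are hypothetical examples.
    ROLE_PERMISSIONS = {
        "analyst": {"read:customer_data"},
        "engineer": {"read:customer_data", "deploy:application"},
        "auditor": {"read:audit_log"},
    }

    def is_allowed(role: str, permission: str) -> bool:
        """Deny by default: only an explicitly granted permission passes."""
        return permission in ROLE_PERMISSIONS.get(role, set())

    assert is_allowed("engineer", "deploy:application")
    assert not is_allowed("analyst", "deploy:application")   # least privilege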
    360 security
    In a figure:
    See right side.
    The content in the figure for the topics is odd. Restructuring what is there and reordering the topics:
    Issue: missing situational awareness of the business application.
    Org: Application - Data, Code security
    👁 Authentication & On-Boarding
    👁 Data Encryption
    👁 Data Leakage Prevention
    👁 Secure Coding Practices
    👁 Secure Code Review
    👁 Penetration Testing
    Org: Risk governance & compliance
    👁 ISO 27001/HIPAA/PCI, SOC
    👁 Firewall Compliance and Management
    👁 Physical and Logical Reviews
    👁 Audit and Compliance Analysis
    👁 Configuration Compliance

    The header "application" is in fact about platforms, tools.
    Tech: Mobile security
    👁 Rogue Access Point Detection
    👁 Wireless Secure Protocols
    👁 OWASP Mobile Top 10
    👁 Mobile App Automated Scanning
    👁 Dynamic Mobile App Analysis
    👁 Mobile Penetration Testing
    👁 Log and False Positive Analysis
    Tech: Platform security
    👁 Web Application Security
    👁 Web Application Firewall
    👁 Database Activity Monitoring
    👁 Content Security
    👁 Secure File Transfer
    👁 OWASP Top 10 and SANS CWE Top 25
    👁 Testing for Vulnerability Validation
    👁 Platform Penetration Testing
    👁 Log and False Positive Analysis

    Issue: missing situational awareness of the business application.
    Tech: Advanced threat protection
    👁 Botnet Protection
    👁 Malware Analysis and Anti-Malware Solutions
    👁 Sandboxing and Emulation
    👁 Application Whitelisting
    👁 Network Forensics
    👁 Automated Security Analytics
    Tech: Network security
    👁 Firewall Management
    👁 Network Access Control
    👁 Secure Network Design
    👁 Unified Threat Management
    👁 Penetration Testing

    What has the header "infrastructure" is only a part of infrastructure.
    Infra: Network security
    👁 DNS Security
    👁 Mail Security
    👁 Unified Communications
    👁 Remote Access Solutions
    👁 Intrusion Detection / Prevention Systems
    Infra: Systems security
    👁 Windows/Linux Server Security
    👁 Vulnerability/Patch Management
    👁 Automated Vulnerability Scanning
    👁 Security Information and Event Management
    👁 Log and False Positive Analysis
    👁 Zero Day Vulnerability Tracking


    The source for the eight domains: in security, the Common Body of Knowledge (CBK) is a comprehensive framework of all the relevant subjects a security professional should be familiar with, including skills, techniques and best practices. The CBK is organized by domain and is annually gathered and updated by (ISC)² (International Information Systems Security Certification Consortium) to reflect the most relevant topics within the industry. The eight CISSP domains are the following:
    1. Security and Risk Management. ➡ Governance dealing with risk management concepts, threat modeling, the security model, security governance principles, business continuity requirements, and policies and procedures.
    2. Asset Security. ➡ Topics that involve data management and standards, longevity and use, how to ensure appropriate retention and how data security controls are determined.
    3. Security Engineering. ➡ The security engineering processes, models and design principles, including database security, cryptography systems, clouds and vulnerabilities.
    4. Communications and Network Security. ➡ Network security, the creation of secure communication channels, such as secure network architecture design and components including access control, transmission media and communication hardware.
    5. Identity and Access Management. ➡ System access, authorization, identification and authentication, including access control and multifactor authentication.
    6. Security Assessment and Testing. ➡ Tools needed to find vulnerabilities, bugs and errors in code and system security, as well as vulnerability assessment, penetration testing and disaster recovery.
    7. Security Operations. ➡ Deals with digital forensic and investigations, detection tools, firewalls and sandboxing, as well as incident management.
    8. Software Development Security. ➡ How to build and integrate security into the software development lifecycle. For secure development: NIST SP 800-218.
    😱 The CISSP outline does not mention the difference between platforms and business applications.
    R-3.3.4 Jabes vision: Service to the Chief Safety Officer
    Issue to solve: lack of awareness ❌ of the difference between platforms, middleware, and business applications. The CISO role is only technical and unclear in accountabilities and responsibilities.
    Safety is intangible in the flow of a product in the complete product lifecycle. The consequence should be: clear accountability and responsibilities for all involved in the flow.
    WOW security shop-floor boardroom
    The position of a CSO.
    In a figure:
    See right side.

    Safety Analyst supporting delivery monitoring,
    Business Analyst in the feedback loop: performance & risks.

    Improvement idea:
    💡👁❗ A clear organisational role for the Chief Safety Officer.
    See "control & command" (I-C6isr) for more on which roles are needed and how they are changing.
    Search for provisions

    R-3.4 Information processing adding value

    Improving the flow of value streams is the demand of an organisation. Being purposeful in continuous improvement is the value of ICT services.
    💡 There is an issue:
    Changing an existing culture for doing work is an NP-hard problem.
    R-3.4.1 Focus: value stream at the shopfloor
    Delivering a product in a pull push cycle
    What Is Systems Architecture And Why Should We Care? (G. Alleman, 2024)
    If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process.
    SystemArchitecture
    Grouping data and processes into information systems creates the rooms of the system architecture. The result of deploying an architecture is arranging the data and processes for the best utility. Many of the attributes of building architecture apply to system architecture. Form, function, best use of resources and materials, human interaction, design reuse, design decisions' longevity, and resulting entities' robustness are all attributes of well-designed buildings and computer systems. ...
    ❗ SDLC is associated with software; the change in this is: it is about systems.
    By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result. References:
    1. A Timeless Way of Building, C. Alexander, Oxford University Press, 1979.
    2. “How Architecture Wins Technology Wars,” C. Morris and C. Ferguson, Harvard Business Review, March–April 1993, pp. 86–96.

    AIproductivityparadox
    AI not the silver bullet
    Microsoft's productivity paradox: data fix burnout or track it (F. Ferrer, 2024). The popular narrative is that AI will eventually replace most jobs, from administrative tasks to more complex roles. But here's a controversial take: AI, when used properly, could actually help rehumanize work. Rather than eliminating jobs, AI can automate mundane tasks, allowing employees to focus on what they do best: creativity, strategy, and problem-solving.
    However, the key lies in how AI and data are deployed. Will AI empower employees, or will it simply create a more automated, less human workplace? If companies don’t tread carefully, they risk turning data-driven productivity tools into instruments of micromanagement, deepening the very burnout they aim to resolve.
    ❗ The everlasting productivity paradox is still going strong. Without solving what is holding back productivity in administrative cyber systems, it will continue.

    Value stream understanding
    Process mining is reverse engineering the value stream (VSM). It is far too complicated to start with process mining without understanding the VSM. From: "Want to do a process mining project" slides and videos (vdaalst).
    🤔 The design of a value stream by humans often assumes seriality where some steps could be executed in parallel. With unpredictable external events, a process flow showing that is more applicable. The difference between an ideal process and reality will become smaller. Checkpoints in the progress of a VSM become necessary.
    Process mining W.vanAalst
    Simplistic sequential and more complex event driven in a figure:
    See left side
    🤔 Even then, do not expect all process events to follow the expectation from the VSM map.

    ❗ Not all process events will follow the ideal expectation for a VSM. Several options of applicable flows can coexist; a reason for this is that parallel execution is possible. For example:
    Process mining W.vanAalst
    Several sequential process flows in a figure:
    See right side
    When doing measurements, markers for these valid possible options must be in place.
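    A minimal sketch of how such markers can be derived, assuming Python and a hypothetical event log: group the events per case and count the distinct flow variants that were actually observed.

    from collections import Counter, defaultdict

    # Hypothetical event log: (case id, activity, order within the case).
    event_log = [
        ("order-1", "register", 1), ("order-1", "check", 2), ("order-1", "ship", 3),
        ("order-2", "register", 1), ("order-2", "ship", 2), ("order-2", "check", 3),
        ("order-3", "register", 1), ("order-3", "check", 2), ("order-3", "ship", 3),
    ]

    # Rebuild the trace per case in the observed order.
    traces = defaultdict(list)
    for case, activity, step in sorted(event_log, key=lambda e: (e[0], e[2])):
        traces[case].append(activity)

    # Each distinct sequence is one observed variant; frequencies show which flows coexist.
    variants = Counter(tuple(trace) for trace in traces.values())
    for variant, count in variants.most_common():
        print(count, " -> ".join(variant))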

    R-3.4.2 An extensive closed loop framework (I)
    Dashboard a380
    Ideate a floorplan, abstraction levels
    A post triggered a short discussion. It resulted in an idea of how to improve information processing to help in decision-making. There are an incredible number of good usable models out there, but we've lost our way by getting too caught up in the tech buzz. What we're doing is exchanging pretty abstract ideas between people with language and pictures.
    At the vertical axis:
    Abstract      Words-1        Words-2
    1. Context    Governance     Direction
    2. Content    Organization   Form, texture
    3. Logical    Information    Logical contents
    4. Physical   Data           Building blocks
    5. Details    ICT            Features

    Of course, there are more words to come up with; what matters is that they support the story, make it stronger, and give it more structure. I made the link to Zachman with 3 theoretical and 3 practical interpretations arranged on a vertically ordered axis.
    The result is a 6*6 surface that can serve as a floor plan, the lean mindset, genba. For each cell a detailed "Why" is to be answered.

    Ideate: "Closed loops"
    On the horizontal axis, you can get it ordered, organized with an underlying explanation:
    Abstract   Words-1
    What       Optimized business operations
    How        Innovative service provision
    Where      Improved decision-making
    Who        Control & Command
    When       Portfolio change & knowledge assurance
    Which      Culture, Behaviour
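    A minimal sketch, assuming Python, of holding the 6*6-style floor plan built from the two axes above as a plain data structure; the cell content ("why", notes) is an illustrative assumption.

    # The two axes as listed above; every (question, level) cell carries its own "Why".
    QUESTIONS = ["What", "How", "Where", "Who", "When", "Which"]       # horizontal axis
    LEVELS = ["Context", "Content", "Logical", "Physical", "Details"]  # vertical axis

    floorplan = {
        (question, level): {"why": None, "notes": []}
        for question in QUESTIONS
        for level in LEVELS
    }

    # Filling one cell as an example.
    floorplan[("Where", "Logical")]["why"] = "Retrospectives feed small iterative changes"

    unanswered = [cell for cell, content in floorplan.items() if content["why"] is None]
    print(f"{len(unanswered)} cells still need a 'Why'")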

    😉 Lean and Agile are not about a goal of full automation into complicated dashboards. They are about automating repeatable and time-consuming tasks, so that there is a focus on the tasks with more added value. A balance between automation and human involvement should always be in place. Continuous improvement with learning from all attempts at improvements is central as a culture.
    💡👁❗ A new organisational structure for: "Closed Loops".
    Development, engineering, architecting on the one side and operations, using, exploiting on the other are two complementary human mindsets. Forcing people specialised in one of the two to do the other is not respectful. In a cooperative team that is able to innovate, you need both of them.

    preparations
    data literacy RRU stages:
    Optimized business operations
    Innovative service provision
    Improved decision-making
    The floorplan is the map of all 6*6 areas.
    What: Optimized business operations
    Bills of materials - theoretical plan:
    1. Understand "Decision makers" needs
    2. Create value that meets "Decision makers" needs
    3. Ensure feedback loops with "Decision makers"
    Bills of materials - practical realisation:
    1. Ensure feedback loops with "Decision makers"
    2. Implement short delivery cycles for "Decision makers"
    3. Focus on "Decision makers" satisfaction and experience

    How: Innovative service provision
    Functional Specs - theoretical plan:
    1. use lean principles to avoid the three evils: muda mura muri
    2. use tools that promote collaboration and integration (jabes build, test, operate)
    3. automate where it improves the efficiency of the whole
    Functional Specs - practical realisation:
    1. Automate where it improves the efficiency of the whole
    2. Minimize manual processes that are prone to errors
    3. Ensure continuous improvement of tools and workflows

    Where: Improved decision-making
    Drawings Geometry - theoretical plan:
    1. Encourage a culture of feedback and adaptation
    2. Use retrospectives to learn from mistakes
    3. Preferably implement small iterative changes
    Drawings Geometry - practical realisation:
    1. Preferably implement small iterative changes
    2. Ensure teams are continuously evolving their skills and knowledge
    3. Quickly implement changes and test new ideas (Jabes: suggestionbox, backlog)

    R-3.4.3 An extensive closed loop framework (II)
    implementing & usage
    data literacy ACA stages:
    Control & Command
    Portfolio Change
    Culture, Behaviour
    The floorplan is the map of all 6*6 areas.
    Who: Control & Command
    Operating instructions - theoretical plan:
    1. Create multidisciplinary teams with clear objectives
    2. Promote responsibility and autonomy
    3. Ensure collaboration beyond hierarchical lines
    Operating instructions - practical realisation:
    1. Ensure collaboration beyond hierarchical lines
    2. Encourage continuous: knowledge sharing, learning processes (jabes: specifications)
    3. Use self-organizing teams for flexibility

    When: Portfolio change & knowledge assurance
    Timing diagrams - theoretical plan:
    1. Focus on optimizing the value flow
    2. Measure and analyze every step in the value stream
    3. Use information to make informed decisions
    Timing diagrams - practical realisation:
    1. Use information to make informed decisions
    2. Ensure a balance between speed and quality
    3. Eliminate bottlenecks and inefficiencies

    Which: Culture, Behaviour
    Design objectives - theoretical plan:
    1. Promote transparency throughout the organization
    2. Cultivate a culture of trust and openness
    3. create a blame-free environment where people feel safe to make mistakes
    Design objectives - practical realisation
    1. create a blame-free environment where people feel safe to make mistakes
    2. Create shared visions for missions
    3. Value diversity in thinking with methodologies

    R-3.4.4 Retrospective "Closed Loops", Genba-2
    "Closed loops"
    "Closed loops" refer to systems that continuously monitor, analyze, and optimize processes to improve efficiency and reduce waste. This integrates data and feedback mechanisms to create a sustainable and efficient flow of resources. This is a departure from traditional linear production models, aiming for circularity and sustainability. This structure is a subset of the proposal for a new DevOps structure.

    siar not STAR PDCA
    The SIAR model
    The model was created from experienced unhappiness with other models, including PDCA.
    1. Situation
    2. Initiatives or Inputs
    3. Actions
    4. Requests and Results
    For the flows, left to right:
    1. The pull is at the bottom, right to left
    2. The push is at the top, left to right
    3. The cycle is clockwise, starting at the bottom right
    4. Negotiations are controversial for flow continuity, limiting overproduction
    The PDCA and DMAIC cycles are the same, but shifted into diagonals instead of the vertical control and horizontal flow. Act (decide what is next) by analysing the Situation.
    The formal definition: situational awareness is often described as three ascending levels:
    1. Perception of the elements in the environment,
    2. Comprehension or understanding of the situation, and
    3. Projection of future status.
    People with the highest levels of SA have not only perceived the relevant information for their goals and decisions, but are also able to integrate that information to understand its meaning or significance, and are able to project likely or possible future scenarios. These higher levels of SA are critical for proactive decision making in demanding environments.
    Improvement idea:
    💡👁❗ A new organisational structure for: "Closed Loops".
    At "control & command" (I-C6isr) for more what data literacy awareness is needed.
    jabes save point

    R-3.5 Adapting Cooperative Communication

    Understanding the meaning of intentions is the first step towards understanding the information from data sources.
    💡 An issue, a paradox:
    Creating and using data sources is done by following steps. Continuous improvement is the path to success.
    R-3.5.1 Genba 1,2,3 using the virtual shopfloor
    The kind of decisions to understand
    Making Choices in the Absence of Information (G. Alleman, 2024). Decision-making in uncertainty is a standard business function and a normal technical development process. The world is full of uncertainty.
    🤔 Those seeking certainty will be woefully disappointed.
    🤔 Those conjecturing that decisions can't be made in uncertainty are woefully misinformed.
    Along with all this woefulness is the boneheaded notion that estimating is guessing and that decisions can actually be made in the presence of uncertainty without estimating.
    Here's why.
    When we are faced with a choice between multiple decisions, a choice between multiple outcomes, each is probabilistic. If it were not—that is, if we had 100% visibility into the consequences of our decision, the cost involved in making that decision, and the cost impact or benefit impact from that decision—it's no longer a decision. It's a choice to pick between several options based on something other than time, money, or benefit.
    Uncertainty comes in many forms. There are three levels of orchestration support for decision making:
    Genba-1 strategic, Genba-2 tactical, Genba-3 operational.

    Chief Data Officer (CDO) & Chief Analytics & Intelligence Officer (CAIO)
    A complex of interactions, similar to the SIAR model, in understanding information.
    WOW shop-floor
    A role of the CAIO is helping others, as a central point in the middle.

    A complex of interactions, similar to the SIAR model, in understanding information.
    WOW shop-floor
    A role of the CDO is helping others understand each other, as a central point in the middle.

    Value stream understanding
    Process mining is reverse engineering the value stream (VSM). It is far too complicated to start with process mining without understanding the VSM. From: "Want to do a process mining project" slides and videos (vdaalst).
    ❗ Get information on the expected possible flows, for example.
    Process mining W.vanAalst
    Several sequential process flows in a figure:
    See right side
    ❗ Get information on the designed possible flows, for example.
    Process mining W.vanAalst
    Simplistic sequential and more complex event driven in a figure:
    See left side

    R-3.5.2 An extensive Data literacy framework (I)
    Darrel_Huff
    Ideate: data literacy
    A Dutch educator (P.F. Oosterbaan), with "from data to chocolate" as the simplistic goal of data literacy, organised a session. The content goal of the educator is comparable to How to Lie with Statistics (Darrell Huff, 1954). The book is a brief, breezy illustrated volume outlining the misuse of statistics and errors in the interpretation of statistics, and how errors create incorrect conclusions. The difference is that the educator wants to educate people into becoming data literate and creating understandable results. The session resulted in an idea of what is to be communicated.
    😉 New is: a flow for data literacy in six ordered steps. The result is that it becomes actionable and interpretable. There is a logical underlying explanation by dependencies.
    The first stages: Recognize, Read, Understand, Analyse conform to the DIKW flow (pyramid): data, information, knowledge, insight.
    Added are: Communicating insight and Act - implement change. The horizontal axis with the six stages, ordered and organized:
    Abstract   Words-1
    What       Recognize data for information (data literacy RRU)
    How        Reading information
    Where      Understanding information
    Who        Analysing - getting insight (data literacy ACA)
    When       Communicating insight
    Which      Act - implement change

    😉 Lean Agile is about avoiding overload and avoiding waste by overload. Continuous improvement with learning from all attempts at improvements is central as a culture.
    There is an exponential growth in data. The threat is being overloaded by information, not knowing anymore what to do next. The complexity of what is going on in all the detailed information is another overload threat.
    💡👁❗ A data literacy structure: "Data driven decision making".

    preparations
    data literacy RRU stages:
    Information Read
    Data Recognize
    Insight Analyse
    The floorplan is the map of all 6*6 areas.
    What: Recognize data for information
    Bills of materials - theoretical plan:
    1. Recognize important sources for data
    2. Prioritize in important sources
    3. Important sources defined as required resource for systems
    Bills of materials - practical realisation:
    1. Important sources defined as required resource for systems
    2. Alignment to scales of measurements with operations
    3. Defined scales of measurements for resources as master data

    How: Reading information
    Functional Specs - theoretical plan:
    1. Understand the meaning of used scales for measurements
    2. Define the storing, archive and destroying for measurements
    3. Defined usage of measurements for resources as master data
    Functional Specs - practical realisation:
    1. Defined usage of measurements for resources as master data
    2. Implement defined scales of measurements for resources as metrics
    3. Realisation of "data collectors" for resources in place and serviced
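    A minimal sketch of the list above, assuming Python: a measurement definition kept as master data (name, scale, unit, retention) that a data collector can validate raw values against; the field names and values are illustrative.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class MeasurementDefinition:
        name: str             # what is measured
        scale: str            # nominal | ordinal | interval | ratio
        unit: Optional[str]   # only meaningful for interval/ratio scales
        retention_days: int   # store, archive or destroy policy

    lead_time = MeasurementDefinition(name="lead time", scale="ratio",
                                      unit="days", retention_days=730)

    def validate(definition: MeasurementDefinition, value) -> bool:
        """A data collector checks a raw value against its master-data definition."""
        if definition.scale in ("interval", "ratio"):
            return isinstance(value, (int, float))
        return isinstance(value, str)

    print(validate(lead_time, 12.5))   # True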

    Where: Understanding information
    Drawings Geometry - theoretical plan:
    1. Simple usage of the specified measurements
    2. Getting wisdom insight by interactions of multiple measurements
    3. Encourage a culture of feedback and adaptation
    Drawings Geometry - practical realisation:
    1. Encourage a culture of feedback and adaptation
    2. Analytics support for getting wisdom insight by measurement interactions
    3. Scheduled delivery for the simple specified measurements

    R-3.5.3 An extensive Data literacy framework (II)
    implementing & usage
    data literacy ACA stages:
    Analysing for insight
    Communicating insight
    Act - implement change
    The floorplan is the map of all 6*6 areas.
    Who: Analysing - getting insight
    Operating instructions - theoretical plan:
    1. Understand "Decision makers" needs
    2. Create value that meets "Decision makers" needs
    3. Ensure feedback loops with "Decision makers"
    Operating instructions - practical realisation:
    1. Ensure feedback loops with "Decision makers"
    2. Implement short delivery cycles for "Decision makers"
    3. Focus on "Decision makers" satisfaction and experience

    When: Communicating insight
    Timing diagrams - theoretical plan:
    1. Focus on optimizing the value flow
    2. Measure and analyze every step in the value stream
    3. Use information to make informed decisions
    Timing diagrams - practical realisation:
    1. Use information to make informed decisions
    2. Ensure a balance between speed and quality
    3. Eliminate bottlenecks and inefficiencies

    Which: Act - implement change
    Design objectives - theoretical plan:
    1. Promote transparency throughout the organization
    2. Cultivate a culture of trust and openness
    3. create a blame-free environment where people feel safe to make mistakes
    Design objectives - practical realisation
    1. create a blame-free environment where people feel safe to make mistakes
    2. Create shared visions for missions
    3. Value diversity in thinking with methodologies

    R-3.5.4 Retrospective lean: Data literacy, Genba-1
    What is Data literacy?
    Data literacy is the ability to read, understand, create, and communicate data as information. Much like literacy as a general concept, data literacy focuses on the competencies involved in working with data. It is, however, not similar to the ability to read text since it requires certain skills involving reading and understanding data.
    Data literacy refers to the ability to understand, interpret, critically evaluate, and effectively communicate data in context to inform decisions and drive action. It is not a technical skill but a fundamental capability for everyone, encompassing the skills and mindset necessary to transform raw data into meaningful insights and apply these insights within real-world scenarios.

    The data, information life cycle
    The six life cycle stages:
    1. Plan: what kind of information is needed
    2. Design: preparation with scale of measurements
    3. Realisation: create data collectors
    4. Manage: store, archive or destroy the information
    5. Usage: simple usage of the specified measurements
    6. Insight: Getting wisdom insight by interactions of multiple measurements
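    A minimal sketch, assuming Python, that encodes the six stages above as an ordered enumeration; the rule that a stage may only advance one step at a time is an illustrative assumption.

    from enum import IntEnum

    class LifeCycleStage(IntEnum):
        PLAN = 1
        DESIGN = 2
        REALISATION = 3
        MANAGE = 4
        USAGE = 5
        INSIGHT = 6

    def can_advance(current: LifeCycleStage, target: LifeCycleStage) -> bool:
        """Illustrative rule: stages are entered in order, one step at a time."""
        return target == current + 1

    print(can_advance(LifeCycleStage.DESIGN, LifeCycleStage.REALISATION))  # True
    print(can_advance(LifeCycleStage.PLAN, LifeCycleStage.USAGE))          # False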

    Feedback loops play an integral role in customer service and business processes. Creating a feedback system involves several key steps to ensure that feedback is collected, analyzed and acted upon effectively.
    Data literacy is a requirement for being able to work with measurements in closed loops. Improvement idea:
    💡👁❗ A data literacy structure: "Data driven decision making".
    At "control & command" (I-C6isr) for more what effective efficient results are possible.
    Confused-2

    R-3.6 Maturity 5: ICT solutions adding value

    Continuous improvement using BI&A, business intelligence & analytics for closed loops is the principle with lean.
    From the three interrelated ICT scopes: only having a focus on technology will fail by missing what a strategy for an organisation is about.

    R-3.6.1 Mindset prerequisites
    Systems Philosophy
    Purposeful systems (book 1972), R.L. Ackoff was an American organizational theorist, consultant, and Anheuser-Busch Professor Emeritus of Management Science at the Wharton School, University of Pennsylvania. Ackoff was a pioneer in the field of operations research, systems thinking and management science.
    The influence on systems thinking:
    Any human-created system can be characterized as a "purposeful system" when its "members are also purposeful individuals who intentionally and collectively formulate objectives and are parts of larger purposeful systems".
    Other characteristics are:

    Levels in Lean Agile
    Levels in lean: How Many Genbas are There? (B. Emiliani, 2019). There is no link to the source anymore. He has chosen not to share the ideas anymore, with the reason that personal financial profit is more important than improvements by sharing jigsaw pieces of knowledge.
    The world of leveled genba-s:
    My focus since the mid-1990s has been Genba 1 — the mind of leaders. For me, Genba 1 is the most interesting genba by far; specifically, leaders’ mindset, thinking, decision-making (including no decision), and actions (including no action). Genba 1 is the most challenging because it does not reveal the truth as easily as Genba 3 and Genba 2.
    In fact, Genba 1 actively seeks to conceal and subvert the truth, sometimes unknowingly. Genba 1 seems to be an unbreakable enigma, but the code can indeed be cracked.
    .. Simple causality informs us that Genba 1 is what allows Genba 3 and Genba 2 to happen or not happen. The fundamental problem that I have long pursued is:
    Genba 1 2 3
    The leveled genbas idea in a figure,
    see right side.
    There is a lot of frustration behind those questions. A lot of valuable collected information is behind that. The split into the several levels gives a direction for what is going on:

    Going for lean, Genba-1
    Lean in a 3*3 plane is an innovative idea. It is a result of combining the SIAR model with lean.
    WOW shop-floor
    The lean philosophy in a 9-plane figure,
    see right side.

    Not only:
    🎭 pillars, bars,
    🎭 or diagonals,
    🎭 edges - moderators
    🎭 but also repetitive clockwise cycles.
    Each pillar is bottom up from a novice to a master mind.
    ❶ ❷ ❸ The left pillar is what organisational leaders are supposed to do.
    ❹ ❻ The middle pillar is what a technical leader can do.
    ❼ ❽ ❾ The right pillar is where an advisor can help.
    In the middle are the strategic goal and the threats. It is the organisational leaders who enable this, but where to start?
    1. Have the strategic goals translated into something measurable, goal: closed loops.
    2. Go around clockwise from the right bottom corner.
    3. Continuously repeat the cycle, holding off threats, improving on the set goals.

    Improvement idea:
    💡👁❗ An approach for going holistic lean agile at all genba levels.
    At "control & command" (I-C6isr) to do more investigation.
    R-3.6.2 An extensive DevOps framework (I)
    Ideate: Lean at devops
    😉 Lean and Agile are not about a goal of full automation. They are about automating repeatable and time-consuming tasks, so that teams can focus on the tasks with more added value. There should always be a balance between automation and human involvement. Continuous improvement with learning from all attempts at improvements is central as a culture.
    💡 A post triggered a short discussion. It resulted in an idea of how to improve development and operations (DevOps), the information technology service. DevOps principles (Dasa, 2024)
    Dasa scorecard
    These steps from Dasa assume full automation is the goal; that is not correct when wanting a lean culture.

    👐 The real attention point for that is: "culture".
    Development, engineering, architecting on the one side and operations, using, exploiting on the other are two complementary human mindsets. Forcing people specialised in one of the two to do the other is not respectful. In a cooperative team that is able to innovate, you need both of them.
    What would the development line look like?
    preparations
    data literacy RRU stages:
    Customer Focus
    Processes & Tools
    Continuous learning & improvements
    The floorplan is the map of all 6*6 areas.
    What: Customer Focus
    Bills of materials - theoretical plan:
    1. Understand customers needs
    2. Create value that meets customers needs
    3. Ensure feedback loops with customers
    Bills of materials - practical realisation:
    1. Ensure feedback loops with customers
    2. Implement short delivery cycles for customers
    3. Focus on customer satisfaction and experience

    How: Processes & Tools
    Functional Specs - theoretical plan:
    1. use lean principles to avoid the three evils: muda mura muri
    2. use tools that promote collaboration and integration (jabes build, test, operate)
    3. automate where it improves the efficiency of the whole
    Functional Specs - practical realisation:
    1. Automate where it improves the efficiency of the whole
    2. Minimize manual processes that are prone to errors
    3. Ensure continuous improvement of tools and workflows

    Where: Continuous learning & improvements
    Drawings Geometry - theoretical plan:
    1. Encourage a culture of feedback and adaptation
    2. Use retrospectives to learn from mistakes
    3. Preferably implement small iterative changes
    Drawings Geometry - practical realisation:
    1. Preferably implement small iterative changes
    2. Ensure teams are continuously evolving their skills and knowledge
    3. Quickly implement changes and test new ideas (Jabes: suggestionbox, backlog)

    R-3.6.3 An extensive DevOps framework (II)
    implementing & usage
    data literacy ACA stages:
    Team structure
    Value stream management
    Culture
    The floorplan is the map of all 6*6 areas.
    Who: Team structure
    Operating instructions - theoretical plan:
    1. Create multidisciplinary teams with clear objectives
    2. Promote responsibility and autonomy
    3. Ensure collaboration beyond hierarchical lines
    Operating instructions - practical realisation:
    1. Ensure collaboration beyond hierarchical lines
    2. Encourage continuous: knowledge sharing, learning processes (jabes: specifications)
    3. Use self-organizing teams for flexibility

    When: Value stream management
    Timing diagrams - theoretical plan:
    1. Focus on optimizing the value flow
    2. Measure and analyze every step in the value stream
    3. Use information to make informed decisions
    Timing diagrams - practical realisation:
    1. Use information to make informed decisions
    2. Ensure a balance between speed and quality
    3. Eliminate bottlenecks and inefficiencies

    Which: Culture
    Design objectives - theoretical plan:
    1. Promote transparency throughout the organization
    2. Cultivate a culture of trust and openness
    3. create a blame-free environment where people feel safe to make mistakes
    Design objectives - practical realisation
    1. create a blame-free environment where people feel safe to make mistakes
    2. Create shared visions for missions
    3. Value diversity in thinking with methodologies

    R-3.6.4 Retrospective lean DevOps, Genba-3
    Information, Devops extended as Service
    This comprehensive approach provides a more robust and versatile framework for implementing DevOps within an organization. Each component is approached both strategically and operationally, so that sufficient attention is paid to every aspect of the organization and its transformation process. It can help organizations take a comprehensive view of DevOps that focuses on customer value, processes, teams, culture, and continuous learning.

    The new Devops
    The complete structure of related changes & improvements. Improvement idea:
    💡👁❗ A new organisational structure for: Information Service.
    At "control & command" (I-C6isr) to do more investigation.
    R-3.6.5 Following steps
    Missing link devops bpm devops bianl design bpm design sdlc design bianl
    These are practical SDLC experiences.

    Business Intelligence & Analytics 👓 previous topic.
    bpm, business process 👓 next topic.



    Others are: concepts requirements: 👓
    BPM SDLC Bianl:

    🎯 ✅-GAP CI-SDLC CI-SAFE CI_Inform CI_Mdata CMM5-4IT 🎯
      
    🚧  👁-GAP F-SDLC F_SAFE F_Inform F_Mdata CMM3-4IT 🚧
      
    🔰 Contents E_SDLC  C_SAFE  E_Inform C_Mdata CMM0-4IT 🔰

    © 2012,2020,2024 J.A.Karman
    🎭 Summary & Indices Elucidation 👁 Foreword Vitae 🎭