The organisation powered by ICT, in a ship-like constellation.
The engines (the data centre) are out of sight, below deck.
Serving multiple customers (multi-tenancy) for the best performance and the best profit on all layers.
There are six pillars across a functional and a technical layer.
Within the three internal pillars, linked access is possible via an image map over the given figure.
To go logically backward: 🔰 Too fast .. previous.
⚖ R-1.1.2 Guide to reading this page
Technology vs Organisation - Gap
This page is about Technology. Technology is the enabler, in a service-providing role, for the missions of the organisation.
When a holistic approach to organisational missions and organisational improvements is wanted, the service by this technology pillar should be at an adequate level. 💣 The first assumption with all the fast technology evolutions is that there are no technology impediments.
It might be true for technology itself, but having found all those gaps, it is terribly wrong for the service delivered by technology.
What to do about this?
Acknowledge the gaps and issues in the technology service.
This is easily seen as blaming and negativity.
It hurts the safe existing situation of what has always been done by people.
Get a shared mindset on the need for improvement.
Let in ideas for improvement from within the enterprise.
Really make the changes, with the improvement goals in mind.
🤔 The priorities are set by the enterprise and its organisational missions, not by technology.
The project triangle of the Software Development Life Cycle
Wanting artefacts deployed ⬅ 💣 forgetting the goal at production: the artefacts' quality and their impact.
Wanting deployment into production ⬅ 💣 forgetting to select verified, well-functioning artefacts.
Wanting artefacts at production ⬅ 💣 forgetting deployment and the life cycle.
the assembly line (applications) - services / products
Providing a safe environment that includes cyber security.
A strategic holistic vision is still to be defined.
The advisory role of a CSO, Chief Safety Officer.
Information chain management is a gap to be solved.
In the information age the location of data and information is becoming dispersed in space; time is the other variation.
Manage the dependencies in the information chain, space synchronisation.
Manage effects in the information chain by changes, time synchronisation.
Usage of a shared vocabulary supports the goal of shared values, shared goals, and collaboration with human respect.
A standard naming classification for artifacts.
A translation of naming conventions into realisations.
The Technology, Identities triangle for Business applications
Identities defined by technology ⬅ 💣 forgetting the functionality goal in business applications.
Defining identities at applications ⬅ 💣 failing to have generic technology supporting automated definitions.
Getting technology for applications ⬅ 💣 forgetting how identities get their functionality in business applications.
Optimisation processes
😉 Required are closed loops for:
the operations, functioning for services - products
engineering, functionality for services - products
the defined purpose of services - products
possible purposes for services - products
Working toward an approach for an optimized business and technology situation, there is a gap in knowledge and tools.
The proposal to solve those gaps is "Jabes", with the idea of using a virtual shop floor, "Jabsa".
Changing the first failed attempt and reworking old pages and infographics.
2024 week 35
Web content redesign, with Jabes being pivotal, started for this page.
Adding the C6ISR content, replacing data stovepipes, opened the next steps.
The goal for technology has become clearer by:
Knowing what the conceptual gaps in technology are.
Insight into needed improvements according to Gemba, the shop floor.
The old personal experiences are moving into a vitae addendum.
2024 week 38
Continued relocating old content to this page.
Collecting new, more recent information.
2024 week 41
First six chapters (gaps) published in draft, as well as the third six chapters (adding value).
From the six understanding chapters, two to complete later.
Planning to do:
Chapters 2.4 and 2.5 to be finished later.
Old topic pages, now evaluated for C-serve or R-serve (technology).
Content: images and pages to relocate.
Content: to rebuild in the new structure.
R-1.2 Lean Agile: SDLC challenge
Any development life cycle, including that of software, has assumptions.
A well known staging standard:
Develop
Test
Acceptance
Production
This DTAP staging has many variants in wording, but the principles are the same.
The assumption is that everyone easily understands what to do, how to do it, and why it should be done.
⚖ R-1.2.1 The big elephant in SDLC misunderstandings
How to do SDLC
There are many issues, with root causes in misunderstandings and wrong perceptions of:
Analyses of Business Applications
Business applications
Infrastructure
🤔 Business intelligence and analytics are not associated with release management; just think of trying to manage all that content in Excel spreadsheets. ✅ Business applications are the usual context everybody associates with SDLC.
These are usually monolithic and siloed, with a missing design of business components and of what infrastructure is used.
There are two types of business components:
Business information, data. The assets that are input and after processing the result.
Business rules, code. How transformations on information are expected to work by processing, for the good, the bad, and the ugly.
➡ 💣 When this is not seen in the solution, it is not about a business application. ❗ Infrastructure is the historical association with information technology.
A separation of concerns:
Platforms, tools, middleware, DBMS systems, ERP systems, file transfers, messaging, web traffic.
These have their own approach to release management because of peculiar dependencies; one of those dependencies is continuity for business applications.
Operating systems: the software component enabling hardware systems. The hardware is more and more virtualised by software.
Network, adapters, line connections, routers, firewalls, segmentations for safety isolations.
Although this seems a logical separation of concerns, there are still a lot of misunderstandings and heated discussions.
A platform or DBMS is, from the perspective of the operating system, an "application"; for a "business application" it is a required infrastructure component.
➡ 💣 Where this kind of discussion is seen, the expectation of achieving a compliant environment is mission impossible.
Divide and conquer
😉 Divide a bigger issue into smaller ones to have each of them better understood. (Edsger W. Dijkstra EWD709)
Because we struggle with the small sizes of our heads as long as we exist, we need intellectual techniques that help us in mastering the complexity we are faced with:
separation of concerns
effective use of abstraction
devising appropriate concepts
When faced with an existing design, you can apply them as a checklist; when designing yourself, they provide you with strong heuristic guidance.
In my experience they make the goal "intellectually manageable" sufficiently precise to be actually helpful, in a degree that ranges from "very" to "extremely so".
Example: missing the SDLC goal and artefacts
😲 The idea of putting a data mining, AI, machine learning (ML) project into git for versioning will cause big issues.
Be aware: the operational data for training and validation has the role of source code; the goal is using some method to fit the line (the model).
An ML project uses real operational information. The size easily grows to many gigabytes, with 60 GB and more being small.
SDLC tools assumptions:
Sizing limits of versioning tools like git are based on 3GL languages.
Detecting code differences is based on 3GL source code that gets compiled.
A well-known 3GL example is "C", used for maintaining the Linux operating system.
These assumptions are usually a mismatch in such projects. Bitbucket from Atlassian
has a repository limit of 1 GB for the total of all historical versions.
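A minimal sketch of one common workaround, keeping the large data outside git and versioning only a small manifest of content hashes (the approach that tools like DVC formalise); the directory and file names are illustrative assumptions:

```python
import hashlib
import json
from pathlib import Path

DATA_DIR = Path("data")                 # large ML data, excluded from git (.gitignore)
MANIFEST = Path("data.manifest.json")   # small text file that IS committed to git

def file_hash(path: Path) -> str:
    """Content hash of one data file, streamed so multi-GB files are fine."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest() -> None:
    """Record name, size, and hash of every data file; version only this manifest."""
    entries = {
        p.name: {"bytes": p.stat().st_size, "sha256": file_hash(p)}
        for p in sorted(DATA_DIR.glob("*")) if p.is_file()
    }
    MANIFEST.write_text(json.dumps(entries, indent=2))

if __name__ == "__main__":
    write_manifest()
```

Committing only the manifest keeps the repository tiny while every data version stays identifiable by its hash.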
⚖ R-1.2.2 Legal SDLC obligations
Regulatory Guidelines
😲 These are well documented by several sources at a high and medium abstraction level.
However, there is no real pressure on organisations to have these done well and compliantly.
A change may be coming through new regulations that require following them: NIS2 and DORA (EU).
❗ What is not there: detailed technical implementation instructions with checklists.
❗ What is not in place: regulatory audits with corrective controls.
There is an intangible connection with safety, see: Legal Safety obligations.
The environment should be well secured. Safety measures against accidental (mistakes) or intended (hack) destruction.
Those guidelines, mentioned in regulations, are clear.
The intention is to know which software version of each type was used at any moment in time in the production environment.
Versioning in development is not mentioned. ❶ Protection, safety.
❷ Versions used in production, per "12.5.1 Installation of software on operational systems":
❸ Information quality, impact.
⚖ R-1.2.3 Releases, SDLC, the technology mindset
Classic centralised life cycle model DTAP
❶ The classic approach is having clear stages for releases, with their versions and archiving.
Master maintenance
Emergency fixes
New system version(s)
❷ The methodology assumes a shared development environment in which the developers work cooperatively. ❸ The software library is the central location holding all possible future, current, and previous versions of production software.
The benevolent dictator
❿ In the git reference.
The assumptions are that the developer is also the operator of his environment; knowledge of low-level shell commands is expected. ❶ Using any release tool always requires an ultimate approver.
In git terminology: the dictator. ❷ All artifacts are verified on incremental changes when they are merged; there will be blocking issues when something gets out of order. ❸ 💣 What is missing: a simple connection to well-defined functional acceptance validation.
The distributed life cycle model idea
❹ Git uses local repositories at personalised locations. ❺ The local files are the daily working activities. ❻ The centrally managed repository is synchronised to the local ones. Git was made in 2005, when 64 kb modems were the norm.
💣 The local methodology assumes a shared development environment for cooperative working is not possible.
Git life cycle model DTAP
❼ Nvie is often referred to as a successful approach to using git.
The picture is Git flow from nvie (2010), in principle the same structure as the classic central one.
Master maintenance
Emergency fixes
New system version(s) - features (develop - )
❽ The focus is on developing software (3GL).
What is not there:
Release management and archiving in the central repository.
Requirements. Solving anything for versioning at production.
Supporting other kinds of artifacts, allowing parallel development for information flows.
❾ Enabling AI and ML (machine learning) tools in information flows is done by using multiple stations (a sketch follows the list below).
To segregate dedicated stations:
1. Retrieving information, traceable and auditable.
2. Preparing information for transformations.
3. Transforming information, traceable and auditable, into new one(s).
4. Validating the new information for delivery.
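A minimal sketch of these segregated stations as separate, auditable pipeline steps; the station functions, the example records, and the audit-log format are illustrative assumptions:

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("audit.log")  # illustrative: one JSON line per station run

def audit(station: str, detail: str) -> None:
    """Append a traceable, auditable record for each station."""
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps({"ts": time.time(), "station": station, "detail": detail}) + "\n")

def retrieve() -> list[dict]:
    """Station 1: retrieve information, traceable and auditable."""
    rows = [{"id": 1, "amount": "12.50"}, {"id": 2, "amount": "x"}, {"id": 3, "amount": "7"}]
    audit("retrieve", f"pulled {len(rows)} records from source")
    return rows

def prepare(rows: list[dict]) -> list[dict]:
    """Station 2: prepare information for transformation (drop invalid rows)."""
    ok = [r for r in rows if r["amount"].replace(".", "", 1).isdigit()]
    audit("prepare", f"kept {len(ok)} of {len(rows)} rows")
    return ok

def transform(rows: list[dict]) -> list[dict]:
    """Station 3: transform information into new information."""
    out = [{"id": r["id"], "amount_cents": round(float(r["amount"]) * 100)} for r in rows]
    audit("transform", f"converted {len(out)} rows to cents")
    return out

def validate(rows: list[dict]) -> list[dict]:
    """Station 4: validate the new information for delivery."""
    assert all(r["amount_cents"] >= 0 for r in rows)
    audit("validate", "all rows passed delivery checks")
    return rows

if __name__ == "__main__":
    print(validate(transform(prepare(retrieve()))))
```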
Understanding what and how to manage is confusing: many steps are repeated in another context with the same words, or other words are used for the same context.
Classic life cycle integrating with recent tools
Example: a complete service doing release management. ❿ Following what is asked of technology, there once was a blog on integrating technologies. CA (Computer Associates) was the owning company.
Notice the release management flow by the lines of the master, parallel development, and an emergency fix bringing a release version into production.
It is the developer laptop that is positioned as the starting point.
Another tool, Jenkins, was brought in for tailored scripted (development) packaging.
💣 The focus on the goal to achieve for the organisation, release management, is missing.
It is about tools and personal developer preferences.
Some Endevor documentation is still available:
CA Endevor SCM and
Git integration
R-1.3 Lean Agile: Safety perspectives
What is wrong with the safety of platforms, the safety of applications, and the safety of sensitive information?
What should be: (soll)
Organisation: Safety Accountability
Chief Product Owner (CPO) role ➡ safety
Technology safety insight support as service
💣 However, there are many issues, with root causes in misunderstandings and wrong perceptions.
The usual approach to safety is a focus on technology.
Needed is a mindset for organisational risk & impact.
⚖ R-1.3.1 The safety challenge, cyber administrative
How to review and implement Safety
There are many issues, with root causes in misunderstandings and wrong perceptions of:
Analyses of Business Applications
Business applications
Infrastructure
🤔 Business intelligence and analytics are not associated with safety; just think of trying to manage all that content in Excel spreadsheets. ✅ Business applications are the usual context everybody associates with safety and cyber security.
These are usually monolithic and siloed, with a missing design of business components and of what infrastructure is used.
There are two types of business components:
Business information, data. The assets that are input and after processing the result.
Business rules, code. How transformations on information are expected to work by processing, for the good, the bad, and the ugly.
➡ 💣 When this is not seen in the solution, it is not about a business application. ❗ Infrastructure is the historical association with information technology.
A separation of concerns:
Platforms, tools, middleware, DBMS systems, ERP systems, file transfers, messaging, web traffic.
These have their own approach to release management because of peculiar dependencies; one of those dependencies is continuity for business applications.
Operating systems: the software component enabling hardware systems. The hardware is more and more virtualised by software.
Network, adapters, line connections, routers, firewalls, segmentations for safety isolations.
Although this seems a logical separation of concerns, there are still a lot of misunderstandings and heated discussions.
A platform or DBMS is, from the perspective of the operating system, an "application"; for a "business application" it is a required infrastructure component.
➡ 💣 Where this kind of discussion is seen, the expectation of achieving a compliant environment is mission impossible.
Divide and conquer
😉 Divide a bigger issue into smaller ones to have each of them better understood.
Release management and Safety are closely tied in the core information flows (operational plan) of the organisation.
Analysing the core organisational information used in the analytical plane poses an additional challenge for the safety quest.
Release management: Changes in operational production software whether at the infrastructure or at the business layer should be well controlled.
Exchange of business information should be well controlled
Information systems should be well controlled
Accepting platforms externally prescribed for usage and safety while the organisation is accountable is a weird additional issue.
Guidelines: administrator roles
❗ System services are classified as "high privileged"; security should be set by the principle of least privilege. ❗ Administrator roles are classified as "high privileged"; security should be set by the principle of least privilege.
An example of a guideline clearly stating what should be done.
Indispensable baseline security requirements (ENISA, procurement: secure ICT products and services, 2016)
The provider shall design and pre-configure the product according to the least privilege principle, whereby administrative rights are only used when absolutely necessary, sessions are technically separated and all accounts will be manageable. 😲 The usual idea at customers is that this would be in place at suppliers, without any validation that it was done in conformance with the guidelines.
The result is a lot of frustration among the organisation's security staff, who do not understand why access rights for DevOps are left unnecessarily wide open.
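A minimal sketch of validating supplier defaults against least privilege instead of assuming them; the role names and both rights tables are illustrative assumptions:

```python
# Illustrative least-privilege check: compare the rights an account actually has
# (as delivered by the supplier) with the rights its role needs.
NEEDED = {  # assumption: minimal rights per role
    "devops_deploy": {"deploy:prod"},
    "dba_readonly": {"db:read"},
}
ACTUAL = {  # assumption: rights found in the delivered configuration
    "devops_deploy": {"deploy:prod", "db:read", "db:write", "os:root"},
    "dba_readonly": {"db:read"},
}

def excess_rights(role: str) -> set[str]:
    """Rights granted but not needed: findings, candidates for removal."""
    return ACTUAL.get(role, set()) - NEEDED.get(role, set())

for role in NEEDED:
    if extra := excess_rights(role):
        print(f"{role}: revoke {sorted(extra)}")
```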
⚖ R-1.3.2 Legal Safety obligations
Regulatory Guidelines
😲 These are well documented by several sources at a high and medium abstraction level.
However, there is no real pressure on organisations to have these done well and compliantly.
A change may be coming through new regulations that require following them: NIS2 and DORA (EU).
❗ What is not there: detailed technical implementation instructions with checklists.
❗ What is not in place: regulatory audits with corrective controls.
There is an intangible connection with release management, see: Legal SDLC obligations.
The environment should be well secured. Safety measures against accidental (mistakes) or intended (hack) destruction.
Those guidelines, mentioned in regulations, are clear.
PIM, Privileged Identity Management: the intention is to know who performed which critical administrative action, and at what moment, in the production environment.
⚖ R-1.3.3 Safety, cyber, the technology mindset
❿ Securing resource relations is part of the release train.
These are technical resources associated with privileged roles. Safety, cyber security, and multi-tenancy must be in place for information systems.
The organisation, the product manager, should be in the lead for:
❶ Having PIM organised well in a secure, lean way; this is not easy.
Just getting tools won't help with fundamental issues.
❷ Having release management well organised and in place in a secure, lean way.
❸ Having fall back scenarios in place for when things are going badly wrong.
A logical framework for data management - connections
❹ What is missing: a framework for data connections, with defined localisation for the specific situation in an organisation.
The goal is simple: no uncontrolled interventions on business information between the D, T, A, and P environments.
In a figure:
❺ Any authorisation model (security) made effective at Test should conform to the intended Production.
Test should be as similar as possible to the intended Production situation.
Data connections, simulated or active, should conform to the intended Production. ❻ What more is needed for safety depends on the technologies used.
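A minimal sketch of checking that rule in configuration, where each environment's declared data connections must stay inside its own zone; the environment names and endpoint patterns are illustrative assumptions:

```python
# Illustrative DTAP boundary check: every environment's declared data
# connections must stay inside its own zone - no uncontrolled interventions.
ZONES = {"develop": ".dev.internal", "test": ".test.internal",
         "acceptance": ".acc.internal", "production": ".prod.internal"}

CONNECTIONS = {  # assumption: data connections as declared per environment
    "develop":    ["db.dev.internal"],
    "test":       ["db.test.internal", "db.prod.internal"],  # violation!
    "acceptance": ["db.acc.internal"],
    "production": ["db.prod.internal"],
}

def violations(env: str) -> list[str]:
    """Endpoints that cross the environment boundary."""
    return [ep for ep in CONNECTIONS[env] if not ep.endswith(ZONES[env])]

for env in CONNECTIONS:
    for ep in violations(env):
        print(f"{env}: uncontrolled connection to {ep}")
```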
The question of how to make the safety life cycle lean is answered by removing constraints.
Steps in responding for improving safety:
Never do quick & dirty solutions when connecting data pipelines or delivering solutions.
Have a plan what to do first, what is most important, and what is to solve later.
Misunderstanding ICT - business lines, shadow ICT
Finance, business, marketing, sales, and customer relations departments are getting the freedom to do their own ICT.
Financial reporting is inside information, and is sensitive at moments when other persons should not know about it.
This started with the SOX regulation. ❼ Self-service ICT, cloud services, shadow ICT are often seen as successful in overcoming misunderstandings, but with compliance consequences: almost all required policies are missing.
A culture of everyone having their own machines
🤔 Doing all ICT work on a single or limited number of machines requires good management for access and resource usage on all shared resources.
Cloud native benefits from using shared resources.
The complexity of the good management is assumed to be solved by the supplier. 🤔 The alternative is having dedicated machines for all of the components of any business application.
The complexity in this one is good management of the interactions between the components within business applications, aside from those passing across business applications. ❽ Choose which of these complexity challenges to confront, and at what level:
Interactions within machines for tenants.
Interactions over a lot of machines for all tenants.
Grouping in segments of business application types.
The Service-Oriented Architecture: SOA, APIs
In complex environments with many interactions each doing partial actions for the Business, connections are needed for: exchanging data, information, by messages and/or bulk.
The complexity of information versions at interactions is the disadvantage; it resulted in an aversion to SOA and the service bus (2010). ❾ Safety attention points:
Securing system services: OS, network, infrastructure; at rest, in use, and during change activity.
Securing middleware tools, platforms: infrastructure; at rest, in use, and during change activity.
Securing business logic: at rest, in use, and during change activity; release management.
Securing business data, information: at rest, in use, and for connections, interactions.
Securing business process monitoring and analyses: at rest, in use, and in transit.
The common confusion is not seeing all these components as topics of their own, and on top of that holistically.
Classic life cycle safety technology focus
❿ SOC (Security Operations Center) and computer operations once started with RACF and SMF: system performance, system sizing.
If you want to perform library change analysis, you also need CKFREEZE data sets with checksum information.
An ancient figure from the '80s.
Collecting all kinds of resources.
Logs are shared for different goals.
The way of working in principle did not change. The modern product:
IBM QRadar Suite
is a modernized threat detection and response solution designed to unify the security analyst experience and accelerate their speed across the full incident lifecycle.
The portfolio is embedded with enterprise-grade AI and automation to dramatically increase analyst productivity, helping resource-strained security teams work more effectively across core technologies.
Integrated products for: Endpoint security (EDR, MDR), SIEM, SOAR.
(2024)
R-1.4 Information processing functionality
In the beginning, value streams were managed by humans on the floor.
At some moment this capability got lost.
To achieve by transformations:
Operational plane: processing information set by missions and visions, in control for the good, the bad, and the ugly.
Analytical plane: improving the operations, visions.
Research using information set by missions, visions.
The association here is recovering what once was done well, although hardly noticed.
⚒ R-1.4.1 Information: what is it about?
The information factory
The mindset of a circular flow, using a value stream, must always have been in my mind.
The operational plane, similar to a factory:
value stream goes from left to right (top).
demand, request goes right to left (bottom).
Pull: 0, 1, 2, 3
Demand request
Push: 4, 5, 6, 7, 8, 9
Delivery result
Value stream materials: Left to right
See right side:
The analytical plane: similar to a factory.
Documentation, traceable and verifiable: executed at each materialised stage.
(S-W-N-E, 06:00 - 09:00 = 00:00 - 03:00)
Documentation, traceable and verifiable: executed at each process stage.
(SE-SW-NW-NE, 04:30 - 07:30 = 10:30 - 01:30)
Consolidated information on performance is pushed to a central point.
The EDWH 3.0: logistics as the basic central pattern.
In an inbound area the validation of goods, that is information, is done.
At the manufacturing side, outbound, are the internal organisational consumers.
Note: not only a dashboard to be used by managers; all kinds of consumers, including operational lines, are covered.
The two vertical lines are managing who has access to what kind of data, authorized by data owner, registered data consumers, monitored and controlled.
The confidentiality and integrity steps are not bypassed with JIT (lambda).
What is coming in, is expected and valid by administration purchases.
What is coming in, has an internal responsible party with a budget for storage.
What is going out, is delivered to authorised consumers.
What is going out, has an internal responsible party with a budget for delivery.
The word "data contracts" is applicable to this.
It is not something reserved for reporting purposes only (BI, AI).
💣 The EDWH 3.0 is holistic at enterprise level; it covers the operational value stream and others, controlling what is coming in and what is going out.
This is a disruptive, not usual viewpoint.
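A minimal sketch of such an inbound gate driven by a data contract, accepting a delivery only when it is expected (registered) and valid (schema matches); the contract fields and source names are illustrative assumptions:

```python
# Illustrative inbound gate for the EDWH logistics pattern: a delivery is only
# accepted when it is expected (a registered contract) and valid (schema match).
CONTRACTS = {  # assumption: registered data contracts, keyed by source
    "crm_customers": {"owner": "sales", "columns": {"id", "name", "segment"}},
}

def accept_inbound(source: str, rows: list[dict]) -> bool:
    contract = CONTRACTS.get(source)
    if contract is None:
        print(f"reject {source}: not expected, no registered contract")
        return False
    for row in rows:
        if set(row) != contract["columns"]:
            print(f"reject {source}: schema mismatch {sorted(row)}")
            return False
    print(f"accept {source}: {len(rows)} rows, owner {contract['owner']}")
    return True

accept_inbound("crm_customers", [{"id": 1, "name": "A", "segment": "retail"}])
accept_inbound("unknown_feed", [])
```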
ETL, ELT: classic decoupling in modern times
ETL vs ELT: Decoupling ETL
Traditional ETL might be considered a bottleneck, but that doesn't mean it's invaluable.
The same basic challenges that ETL tools and processes were designed to solve still exist, even if many of the surrounding factors have changed.
For example, at a fundamental level, organizations still need to extract (E) data from legacy systems and load (L) it into their data lake.
And they still need to transform (T) that data for use in analytics projects. "ETL" work needs to get done, but what can change is the order in which it is achieved and the new technologies that can support this work.
ELT recurs when the technology used changes. SMED (Single-Minute Exchange of Die) is a Lean tool used in manufacturing to reduce equipment changeover time.
ETL has an intensive preparation transformation; an AI (machine learning) process is a modern option for this.
💣 ELT and ETL as standard patterns is a disruptive, not usual viewpoint. 🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
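A minimal sketch of the order difference: the same extract, transform, and load steps composed as ETL and as ELT; the in-memory "lake" and the sample records are illustrative stand-ins:

```python
# Illustrative ETL vs ELT: identical steps, different order.
def extract() -> list[dict]:
    return [{"name": " Alice ", "amount": "10"}, {"name": "Bob", "amount": "5"}]

def transform(rows: list[dict]) -> list[dict]:
    return [{"name": r["name"].strip(), "amount": int(r["amount"])} for r in rows]

LAKE: dict[str, list[dict]] = {}  # stand-in for a data lake / warehouse

def load(table: str, rows: list[dict]) -> None:
    LAKE.setdefault(table, []).extend(rows)

# ETL: transform before loading - consumers only ever see prepared data.
load("sales_etl", transform(extract()))

# ELT: load raw first - transformation is postponed until a consumer needs it.
load("sales_raw", extract())
load("sales_elt", transform(LAKE["sales_raw"]))

print(LAKE["sales_etl"] == LAKE["sales_elt"])  # True: same result, other order
```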
⚒ R-1.4.2 Operational plane - value streams
The ER-star diagram - predictable planned process
Ramping up the administrative information line, an administrative process, is similar to the physical material flow.
A reason to process something (trigger) is starting the value chain.
The pull & push:
Pull kanban: the request with all necessary preparations and validations. Part 1: Preparation
Designing a kanban system on paper is much easier than implementing it on the shop floor.
Push delivery: with all necessary quality checks tracked by a kanban.
How do you organize the waiting of the kanban for processing? It should be a first-in, first-out system, ..
Now that all material is “kanbanized,” you have to reduce material. ....
Overall, this debugging process will also help you with the "check" and "act" of the PDCA sequence.
If you do this debugging, you will learn if the system actually works and if it is (hopefully) better than what you had before.
Don´t take it for granted that just because you changed something, it must be better than before!
The ER-star diagram (Entity Relationship)
Grouping features around an element is the standard for operational information processing.
Normalising the information structure has the goal of avoiding any duplicates, so there is certainty about what the correct version of stored information is.
Third normal form
Advantages and disadvantages are: ❶ Complicated data models & access plans. ❷ Standard, well-known methodologies. ❸ Required: decoupling for operational-analytical. ❹ Consuming information requires a transformation (illustrated below).
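A minimal sketch of point ❹ using SQLite from Python: in third normal form each fact is stored once, so consuming it back as one row requires a join, the transformation; the tiny schema is an illustrative assumption:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- third normal form: customer facts stored once, referenced by orders
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders   (id INTEGER PRIMARY KEY,
                           customer_id INTEGER REFERENCES customer(id),
                           amount INTEGER);
    INSERT INTO customer VALUES (1, 'Alice', 'Utrecht');
    INSERT INTO orders   VALUES (10, 1, 250), (11, 1, 90);
""")
# Consuming requires a transformation: joining the normalized tables together.
for row in con.execute("""
        SELECT c.name, c.city, o.amount
        FROM orders o JOIN customer c ON c.id = o.customer_id"""):
    print(row)  # ('Alice', 'Utrecht', 250) / ('Alice', 'Utrecht', 90)
```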
Chain Historicals: Operational Plane
There are a lot of differences; it is more an ER model, similar to the DWH star schema.
🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
⚒ R-1.4.3 Analytical Plane - improving flows
Becoming data driven, agile thinking 💡
In a data-driven approach there is a cycle around the engineer and the data analyst. Decoupling ETL:
A clean separation between data movement and data preparation also comes with its own specific benefits:
Less friction. The person or process loading the data isn't responsible for transforming it to spec at load time.
Postponing transformation until after data is loaded creates incentive for sourcing and sharing data.
More control. Loading data into a shared repository enables IT to manage all of an organization's data under a single API and authorization framework.
At least at the granularity of files, there is a single point of control.
Data driven, machine learning, AI.
The labelling of data indicates that a denormalized tabular format is the driver with machine learning.
The highly normalised data approach is abandoned.
Data preparation for machine learning still requires humans
Most enterprise data is not ready to be used by machine learning applications and requires significant effort in preparation. ...
For supervised machine learning to work, the algorithms need to be trained on data that has been labelled with whatever information the model needs.
A disruption compared with the modelling of operational information, which avoids duplication.
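A minimal sketch of that disruption: normalized entities flattened into one wide, labelled training table, deliberately duplicating customer attributes on every row; the columns and the labelling rule are illustrative assumptions:

```python
# Illustrative flattening for ML: normalized entities become one wide, labelled
# table - customer attributes are deliberately duplicated on every order row.
customers = {1: {"name": "Alice", "city": "Utrecht"}}
orders = [{"id": 10, "customer_id": 1, "amount": 250},
          {"id": 11, "customer_id": 1, "amount": 90}]

training_rows = [
    {
        **customers[o["customer_id"]],               # duplicated: denormalized
        "amount": o["amount"],
        "label_high_value": int(o["amount"] >= 100), # assumed labelling rule
    }
    for o in orders
]
for row in training_rows:
    print(row)
```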
Analyse culture: exploratory framework
A lost figure but nicely showing what should be done.
❶ Complicated: the fundamental question is what is needed, for whom, for decisions.
Other complications: ❷ Sense-making of collected sources; which ones? ❸ Supporting decisions at what level of certainty? ❹ Supporting decisions at what time horizon?
🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
⚒ R-1.4.4 Research using information
Research is more time consuming, requiring a lot of human intelligence and interaction.
Operational environments: focusing on real-time interactive updates.
A relatively small number of updates that should perform fast.
Analytical environments: long running time consuming analyses processing in bulk.
The result can be a report or a model (code, logic).
Research for what is not known is far more unpredictable.
The information for that is moving externally to the organisation.
This is very likely a root cause of confusion between analytics and research.
Understanding the information for all involved customers and other interactions was the research area for marketing.
Some examples:
Predicting a churn rate, that is, customers leaving out of dissatisfaction.
Trying to do cross selling, more to existing customers and predicting the future.
Predicting trustworthiness of existing customers.
This is not unusual to do and is part of normal business. When done too excessively and without controls, there are issues.
💣 Adding a research type decoupled from the analytical plane is not a common viewpoint. 🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
R-1.5 Master Data: Communication Cooperation
Data governance, a shared glossary for information processing, is avoided.
The idea is that the basis for knowledge and understanding is just a cost factor.
In fact it is the enabler for:
multidisciplinary collaboration.
cross hierarchy collaboration.
transparency throughout the organisation.
Changing the avoidance for data management requires becoming aware what went wrong.
⚒ R-1.5.1 Changing the wrong thing: "developer problem"
Ambiguity, the wrong problem
"twice the work in half the time." is a well known dogma for creating faster and code. It
appealed to execs unhappy
with what they were getting for their tech spend. (Mike Goitain 2024)
Scrum was never going to work in many orgs because they didn't have a "developer problem" to fix in the first place.
We got stuck treating superficial symptoms instead of the underlying causes.
The real problems lay in the flawed legacy mental model that:
Pitted “The Business” against “IT”
Didn't factor customer needs into choosing which problems to solve
Never figured out how to effectively solve client problems
Put in place structures, governance, & processes that optimized for efficiency
💣 but killed effectiveness
Fixing this will require:
Business & IT to collaborate, working from a clear set of client-centric strategic choices
Redesigning structures, governance, & processes for effectiveness
😉 👉🏾 Putting capable product managers in place to lead the product trio in continuous discovery and delivery
To solve client problems in ways that also deliver business value
Ambiguity, Misunderstanding by words
In the context of information retrieval
a thesaurus is a controlled vocabulary that seeks to dictate semantic manifestations of metadata in the indexing of content objects.
A thesaurus serves to minimise semantic ambiguity by ensuring uniformity and consistency in the storage and retrieval of the manifestations of content objects.
Composed of at least three elements:
1-a list of words (or terms)
2-the relationship amongst the words (or terms), indicated by their hierarchical relative position (e.g. parent/broader term; child/narrower term, synonym, etc.)
3-a set of rules on how to use the thesaurus.
There are standards for this: ISO 25964. 🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
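A minimal sketch of those three elements as a data structure: a term list, relations (broader term, synonyms), and one usage rule resolving any entry word to its preferred term; the vocabulary content is an illustrative assumption:

```python
# Illustrative controlled vocabulary: terms, relations, and one usage rule.
TERMS = {
    "application": {"broader": "software", "synonyms": {"app", "business application"}},
    "software":    {"broader": None,       "synonyms": set()},
}

def preferred_term(word: str):
    """Usage rule: every entry word resolves to exactly one preferred term."""
    w = word.lower()
    if w in TERMS:
        return w
    for term, info in TERMS.items():
        if w in info["synonyms"]:
            return term
    return None  # unknown: a candidate for the glossary board

print(preferred_term("App"))                   # -> application
print(preferred_term("business application"))  # -> application
```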
⚒ R-1.5.2 Focus on the wrong thing: "developer interests"
Tools - in this case programming languages - are a mindset lock-in by technology. Even then, the most modern of today is not the same tomorrow, or is it?
Rethinking the "Real" Programming Languages: A Look at the Most-Used Technologies Today (Fernando Ferrer, 2024)
In the ever-evolving landscape of technology, the tools and languages we choose to learn and use are often driven by trends, peer opinions, and a constantly shifting set of industry best practices.
Yet, as we look at the current state of the programming world, a fascinating contradiction has emerged: the languages once dismissed as “not real” or “too slow” are now at the forefront of technology’s most significant advancements.
The Rise of Python and SQL: From Underdogs to Essential Skills
A decade ago, conversations among developers often echoed a common sentiment: SQL was merely a query language, unworthy of being considered alongside its more robust and versatile counterparts, and Python was labeled as a slow, toy language, suitable only for small scripts and academic exercises.
Fast forward to today, and both SQL and Python have cemented their positions as two of the most indispensable languages in the modern tech ecosystem.
SQL: The Backbone of Data Management
SQL, the Structured Query Language, has transcended its origins as a simple tool for querying databases.
It is now the backbone of data management and analysis across industries. In an era where data is king, the ability to efficiently extract, manipulate, and analyze large datasets is crucial.
SQL’s declarative syntax and powerful query capabilities make it the go-to language for data professionals.
More than 60% of data analysts, data scientists, and business intelligence professionals use SQL daily to interact with data warehouses, run complex analyses, and support decision-making processes.
Despite its straightforward appearance, SQL’s power and flexibility are unparalleled, enabling everything from simple data retrieval to intricate transformations that fuel the insights driving business strategies.
Mentioning SQL as powerful eliminates the age-of-the-tool argument. In that respect the not-seen-as-modern COBOL language is a similar one for discussions.
Python: The Language of Automation, Data, and Beyond
Python's journey from a perceived “slow” language to one of the most popular programming languages worldwide is nothing short of remarkable.
Today, Python’s versatility makes it a favorite among developers, data scientists, and machine learning practitioners.
Its simplicity and readability lower the barrier to entry for beginners while offering robust libraries and frameworks for advanced users.
In the fields of data science and machine learning, Python is the undisputed leader.
Libraries such as Pandas, NumPy, and Scikit-Learn have made it the language of choice for data manipulation, statistical analysis, and algorithm development.
Meanwhile, frameworks like Django and Flask have empowered web developers to build scalable applications quickly.
Despite criticisms of its speed, Python’s flexibility and extensive ecosystem have enabled it to dominate areas where productivity and ease of use are more critical than raw performance.
The "Not-So-Real" Languages Leading the Pack
Interestingly, when we look at the list of the most-used programming languages today, the top two spots are occupied by JavaScript and HTML.
These are the very technologies often dismissed by purists as “not real programming languages.”
JavaScript: The King of the Web
JavaScript’s evolution from a simple scripting language for web browsers to a full-fledged, multi-paradigm programming language has been nothing short of transformative.
Once regarded as a tool for adding trivial interactions to websites, JavaScript is now the cornerstone of modern web development, powering everything from front-end frameworks like React and Angular to server-side environments like Node.js.
JavaScript’s ubiquity and versatility have made it the most popular language among developers.
It is the language of the web, used in everything from building interactive websites and web applications to creating server-side logic and even mobile apps through frameworks like React Native.
HTML: The Language that Shapes the Web
HTML, or HyperText Markup Language, is another so-called “not-real” language that dominates the development landscape.
While it may lack the complexity of traditional programming languages, HTML is fundamental to the structure and presentation of web content.
It forms the skeleton of every web page, defining the structure, layout, and elements that users interact with.
Without HTML, the web as we know it would not exist. Its simplicity is its strength, allowing developers to create accessible, well-structured content that can be rendered across devices and platforms.
What This Means for Developers
The lesson here is clear: what makes a programming language "real" or "valuable" is not its complexity or speed but its utility in solving problems.
The most used programming languages today are not necessarily the ones considered the most powerful or efficient; they are the ones that enable developers to build solutions, extract insights, and create value.
As technology professionals, we must move beyond the notion of what is considered a “real” programming language and focus on the practical applications and impact of these tools.
👉🏾 Learning SQL or Python might not make you a hardcore systems programmer, but it will make you an invaluable asset in a world that increasingly relies on data-driven decision-making and automation.
👉🏾 Similarly, dismissing JavaScript and HTML as mere scripting tools overlooks their central role in shaping the web.
The ability to create interactive, dynamic, and user-friendly web experiences is essential in a digital world where first impressions are often made online.
Looking Forward
In conclusion, the languages we use are tools, and like any tool, their value lies in how effectively they help us solve problems.
SQL, Python, JavaScript, and HTML have proven their worth in diverse contexts, from data analysis and automation to web development and user experience design.
As the tech landscape continues to evolve, so too will the languages and tools we rely on.
The key for developers is not to be bound by preconceived notions of what constitutes a “real” language but to remain adaptable, open-minded, and focused on solving real-world problems with the best tools available.
After all, the most valuable programming languages are not those that conform to arbitrary definitions of legitimacy but those that get the job done. 🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
⚒ R-1.5.3 Missed: Why Systems engineering
At another level is rethinking services at the system level. Rethinking systems to improve.
What Can We Learn From Systems Engineering? (Glen Alleman 2024)
The Lean Aerospace Initiative and the Lean Aerospace Initiative Consortium define processes applicable in many domains for applying lean.
At first glance, there is no natural connection between Lean and System Engineering. The ideas below are from a paper I gave at a Lean conference.
👉🏾 Key Takeaways
Lean and Systems engineering are cousins.
All but trivial projects are systems, and many are systems of systems.
Thinking like a systems engineer is the basis of implementing Lean processes.
Thinking without systems does little to add sustaining value to any process improvement.
Product development is a value stream process, but how the components interact at the technical, business, financial, and operational levels is a systems engineering process.
Lean itself does not possess the vocabulary to speak to these system's complexity issues [1]
Core Concepts of Systems Engineering
Capture and understand the requirements for Capabilities assessed through Measures of Effectiveness (MOE) and Measures of Performance (MOP).
Could you ensure requirements are consistent with what is predicted to be possible in a solution in these MOEs and MPs?
Treat goals as desired characteristics for what may not be possible.
Define the MOE, MOP, goals, and solutions for the project's whole lifecycle in units meaningful to the buyer.
Could you distinguish between the statement of the problem and the description of the solution?
Could you identify descriptions of alternative solutions?
Develop descriptions of the solution.
Baseline each statement of the problem and the statement of the solution.
Except for simple problems, develop a logical solution description.
Be prepared to iterate in design to drive up effectiveness.
Base the solution of evaluating its effectiveness in units of measure meaningful to the buyer.
Independently verify all work products.
Validate all work products from the perspective of the stakeholders.
Management needs to plan and implement effective and efficient transformation of requirements and goals into a solution description.
Typical System Engineering Activities
Technical management
System design
Product realization
Product control, Process control
Technical analysis and evaluation
Post-implementation support
Steps to Lean Thinking [2]
Specify value
Identify value stream
Make value flow continuously
Let customers pull value
Pursue perfection
Differences and Similarities between Lean and Systems Engineering
Both emerged from practice. Only later were the principles and theories codified.
Both have focused on different phases of the product lifecycle.
SE is generally focused on product development and more focused on planning.
Lean is generally focused on product production and more focused on empirical action.
Unlike Lean, SE focuses less on quality, except for Integrated Product and Product Development (IPPD).
Despite these differences and similarities, both Lean and Systems Engineering are focused on the same objective: delivering products or lifecycle value to the stakeholders.
The lifecycle value drives both paradigms and must drive any other process paradigm associated with Lean and Systems Engineering, including paradigms like software development, project management, and the very notion of agile.
A critical understanding often missed is that Lifecycle Value includes the cost of delivering that value.
Value can't be determined in the absence of knowing the cost. ROI and Microeconomics of decision making require both variables to be used to make decisions.
👉🏾 What do we mean by lifecycle?
Generally, lifecycle combines product performance, quality, cost, and fulfillment of the buyer's needed capabilities.[3]
Lean and Systems Engineering share this common goal—the more complex the system, the more contribution there is from Lean and SE.
👉🏾 Putting Lean and Systems Engineering Together on Real Projects
First, some success factors in complex projects [4]
Dedicated and stable interdisciplinary teams
Use of prototypes and models to generate tradeoffs
Prioritizing product features
Engagement with senior management and customers at every point in the project
Some form of high-performing front-end decision process that reduces the instability of key inputs and improves the flow of work throughout the product lifecycle.
This last success factor is core to any complex environment, no matter the process.
Without stability of requirements and funding, improvements to workflow are constrained.
Adapting to changing requirements is not the same as making the requirements—and the associated funding—unstable.
Mapping the Value Stream to the work process requires some level of stability. Systems Engineering, as a paradigm, adds measurable value to any Lean initiative by searching for this stability.
The standardization and commonality of processes across complex systems are the basis for this value. [5]
👉🏾 Conclusions
Lean and SE are two sides of the same coin regarding creating value for the stakeholder.
Lean and SE complement each other during different project phases – ideation, product trades for SE, and production waste removal for Lean anchor both ends of the spectrum of improvement opportunities.
Value stream thinking makes the paths to transition to a Lean paradigm visible while maintaining the systems engineering principles. [6]
The result is the combination of Speed and Robustness – systems are easily adaptable to change while maintaining fewer surprises, using leading indicators to make decisions, and decreasing sensitivity to production and use variables.
[1] "The Lean Enterprise – A Management Philosophy at Lockheed Martin," Joyce and Schechter, Defense Acquisition Review Journal, 2004.
[2] Lean Thinking, Womack and Jones, Simon and Schuster, 1996
[3] Lean Enterprise Value: Insights from MIT's Lean Aerospace Initiative, Murman et al., Palgrave 2002.
[4] "Lean Systems Engineering: Research Initiatives in Support of a New Paradigm," Rebentisch, Rhodes, and Murman, Conference on Systems Engineering, April 2004.
[5] LM21 Best Practices, Jack Hugus, National Security Studies, Louis A. Bantle Symposium, Syracuse University Maxwell School, October 1999
[6] "Enterprise Transition to Lean Roadmap," MIT Lean Aerospace Initiative, 2004 Plenary Conference.
🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
R-1.6 Maturity 3: ICT service impact understood
From the three interrelated ICT scopes:
❌ I - processes & information
✅ T - Tools, Infrastructure
❌ C - Organization optimization
With a focus only on IT4IT, getting mature Life Cycle Management (LCM) requires understanding and acknowledgment of the layered structure.
Each layer has its own dedicated characteristics.
⚖ R-1.6.1 Historical lock-in: release management
The beginning of release management
Mainframe usage: there were several approaches in use, not a single one covering everything.
Those were:
External software, the operating system (OS). Installed using its own tools: SMP/E (IBM). SMP/E: System Modification Program. Very complicated for additional scripting.
Tools and middleware using their own tools and instructions. Complicated for additional scripting and interactions with the infrastructure (OS).
In-house built business applications using a 3GL like COBOL and a DBMS. Batch load: complicated for additional scripting, adapting to situations.
In-house built business applications using an integrated data dictionary (IDD). Interactive load: complicated for additional scripting, adapting to situations.
The challenges are the same these days, just with some other technology.
😱 ❌ Personal experience, reviewed tool: Panvalet. It got rejected for having too little additional value; in-house scripting was more efficient, more effective, and more reliable.
Computer Associates Panvalet
(also known as CA-Panvalet) is a revision control and source code management system for mainframe computers such as the IBM System z and IBM System/370 running the z/OS and z/VSE operating systems.
😱 💰 Personal experience: Endevor, the last mainframe-only tool.
It was a sad experience, because management forced the use of an external automated tool for the complicated situations without realising the impact and the cost involved.
Endevor is a source code management and release management tool for mainframe computers running z/OS. It is part of a family of administration tools by CA, which is used to maintain software applications and track their versions.
👁❗ The focus on external opinions and external technology is the same, seen everywhere.
Failing in quality, delivery time, and cost.
The project triangle of Software Development Life Cycles
Wanting artefacts deployed ⬅ 💣 forgetting the goal at production with the artefacts' quality (not good).
Wanting deployment into production ⬅ 💣 forgetting to select verified, well-functioning artefacts (not cheap).
Wanting artefacts at production ⬅ 💣 forgetting deployment (not fast).
CI (Continuous Integration) / CD (Continuous Delivery)
What is the idea of CI/CD? A technology goal where the business goal is not mentioned.
A post by Sten Pittet (Atlassian), describing what is going on:
CI and CD are two acronyms that are often mentioned when people talk about modern development practices.
CI is straightforward and stands for continuous integration, a practice that focuses on making preparing a release easier.
But CD can either mean continuous delivery or continuous deployment, and while those two practices have a lot in common,
they also have a significant difference that can have critical consequences for a business.
Developers practicing continuous integration merge their changes back to the main branch as often as possible.
The developer's changes are validated by creating a build and running automated tests against the build.
By doing so, you avoid the integration hell that usually happens when people wait for release day to merge their changes into the release branch. 👁❗ Wait ...: by avoiding a shared development environment, a complete external technology stack is introduced to achieve a shared development environment after all.
That sounds like a lot of waste being introduced.
Continuous integration puts a great emphasis on testing automation to check that the application is not broken whenever new commits are integrated into the main branch. 👁❗ There is more: getting to that shared environment, the result of CI, there is no notion of how to do quality testing: program/component test, integration test, acceptance test.
That does not conform to legal and/or compliance requirements that mention the level of testing against organisational mission goals.
Continuous delivery is an extension of continuous integration to make sure that you can release new changes to your customers quickly in a sustainable way.
This means that on top of having automated your testing, you also have automated your release process and you can deploy your application at any point of time by clicking on a button.
👁❗ Why is the business not involved in acceptance, and why should it wait to see changes only after they are deployed into production?
Many business applications have an agreed release date for the business, with good reasons. 👁❗ Just adding some products to a web shop should not be done by changing logic but by changing business data.
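A minimal sketch of the CI gate described above: a change may only be merged when an automated build and test run succeeds; the two commands are illustrative placeholders for a real pipeline:

```python
# Illustrative CI gate: a change may only be merged when build and tests pass.
import subprocess
import sys

STEPS = [  # placeholder commands standing in for a real pipeline
    ["python", "-m", "compileall", "src"],  # "build": byte-compile the sources
    ["python", "-m", "pytest", "-q"],       # automated tests against the build
]

def ci_gate() -> bool:
    for cmd in STEPS:
        if subprocess.run(cmd).returncode != 0:
            print(f"CI gate failed at: {' '.join(cmd)} - merge blocked")
            return False
    print("CI gate passed - change may be merged to the main branch")
    return True

if __name__ == "__main__":
    sys.exit(0 if ci_gate() else 1)
```

Note that such a gate checks technical integration only; the functional acceptance testing questioned above still has to happen elsewhere.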
⚖ R-1.6.2 Historical lock-in: safety management
In-house built safety APIs to centralised systems
In the '80s information technology was very new. In that era there were no centralised security systems, or only ones with very limited functionality.
The next best approach was in-house built dedicated APIs for safety and cyber security.
The first change was removing hard-coded logic from business applications in favor of commercial tools that became available.
🤔 The first commercial tools: ACF2 (hierarchical), RACF (groups), AD (LDAP).
ACF2 takes a hierarchical approach using the UID string.
Manage the user identification string, uid
A well-designed UID can eliminate the need for secondary authorization IDs when they are used to group IDs for resource access, and can ensure that individual accountability is retained and performance is increased.
A UID takes advantage of masking, which lets you represent multiple characters with a single character.
This feature eliminates the need to write multiple rules to cover similar users.
⚠ The quality dependency: a well-designed UID-string naming convention.
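A minimal sketch of what such masking means: one masked rule covers many similar UIDs, so no rule per user is needed; the UID layout (department + role + user) and the mask syntax are illustrative assumptions, not ACF2's real rule language:

```python
import fnmatch

# Illustrative UID masking: assume a convention of department(3) + role(3)
# + user(2), e.g. "FINADM01". One masked rule replaces one rule per user.
UIDS = ["FINADM01", "FINADM02", "FINUSR07", "HRSADM01"]

RULE = "FINADM??"  # '?' masks one character - the idea, not ACF2's syntax

allowed = [uid for uid in UIDS if fnmatch.fnmatchcase(uid, RULE)]
print(allowed)  # ['FINADM01', 'FINADM02']
```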
RACF uses groups.
Security Server Security Administrator's Guide
The group concept is very flexible; a RACF group can be equated with almost any logical entity, such as a project, department, application, service bureau customer, operations group, or systems group.
(page 4, Security Administrator's Guide)
Security Server RACF General User's Guide
As a RACF user, you belong to a default group.
You are automatically connected to that group when you log on.
However you may be defined to more than one group.
(page 43, General User's Guide)
Unix / Linux uses groups.
Unix uses a primary group set with the user.
Access rights are set by assuming everything is a file, granted to the owner, a named group, or everybody.
LAN AD uses security identifiers, similar to groups.
Active Directory uses a structured data store as the basis for a logical, hierarchical organization of directory information.
A security identifier (SID) for a user or group is a complex object.
Access rights are set by assuming everything is an object. Complexes of access rights are granted on objects using SIDs.
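A minimal sketch of the shared idea behind all three (RACF groups, Unix groups, AD SIDs): membership in named groups decides access to a resource; the group and resource names are illustrative assumptions:

```python
# Illustrative group-based access, common to RACF groups, Unix groups, AD SIDs.
USER_GROUPS = {  # assumption: user -> group memberships
    "alice": {"payroll_readers", "staff"},
    "bob":   {"staff"},
}
RESOURCE_ACL = {  # assumption: resource -> groups granted access
    "payroll.dat": {"payroll_readers"},
}

def may_access(user: str, resource: str) -> bool:
    """Access is granted when the user shares a group with the resource ACL."""
    return bool(USER_GROUPS.get(user, set()) & RESOURCE_ACL.get(resource, set()))

print(may_access("alice", "payroll.dat"))  # True
print(may_access("bob", "payroll.dat"))    # False
```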
No matter which commercial product is used, it will lack support for complex situations, e.g. signing sensitive approvals by multiple persons under dedicated conditions. 👁❗ The focus on external opinions and external technology is the same, seen everywhere.
Failing in quality, delivery time, and cost.
A hopeful start toward becoming process oriented for safety and cyber security.
advanced security analytics
Trellix, formerly McAfee (2024).
McAfee is consolidated with others.
SOAPA (Oltsik, 2016)
Enterprise security operations and analytics requirements are forcing rapid consolidation into something new that ESG calls a security operations and analytics platform architecture (SOAPA)
A new name again:
SOAR (security orchestration, automation and response)
SOAR (security orchestration, automation and response) is a stack of compatible software programs that enables an organization to collect data about cybersecurity threats and respond to security events with little or no human assistance.
The SOAPA SOAR difference:
SOAPA vs SOAR
As security guru Bruce Schneier would say, “security is a process, not a product.”
Similarly, the SOAR term focuses on the technology directions of security operations processes rather than the processes themselves.
👁❗ Commercial products sadly are technology driven, ignoring the process.
Components as topics of their own, and on top of that holistic
There are a lot of safety attention points:
Securing system services: OS, network, infrastructure; at rest, in use, and during change activity.
Securing middleware tools, platforms: infrastructure; at rest, in use, and during change activity.
Securing business logic: at rest, in use, and during change activity; release management.
Securing business data, information: at rest, in use, and for connections, interactions.
Securing business process monitoring and analyses: at rest, in use, and in transit.
Confusing and blocking are:
User functional attributes configured by security tools.
For example memory or CPU usage limits.
User technical attributes configured by security tools.
For example the default primary group and home location
Kali Linux
However, if you're a professional penetration tester or are studying penetration testing with a goal of becoming a certified professional, there's no better toolkit - at any price - than Kali Linux.
Although the business, the organisation, is accountable, the awareness of that is lacking.
👁❗ Ignoring the safety process and requirements is common with attention-grabbing technology. 👁❗ Of the long list of safety attention points, only a very small part gets some attention.
Removing bottlenecks, continuous flows
Integration of objects and deployment of business applications: how to align with compliance rules from the business perspective?
That topic is a complicated situation. Cynefin:
The complicated domain consists of the "known unknowns". The relationship between cause and effect requires analysis or expertise; there are a range of right answers.
The framework recommends "sense analyze respond": assess the facts, analyze, and apply the appropriate good operating practice.
⚖ R-1.6.3 Historical lock-in: information flows
There are several types of information processing; there is no strategy or vision in place at the moment.
Information flow by type & goal:
Operational plane: processing information set by missions, visions.
Analytical plane: processing information improving the operations, visions.
Research using information set by missions, visions.
👁❗ A cultural change is required to solve these kinds of gaps.
⚖ R-1.6.4 Historical lock-in: master data
For master data the goal is understanding each other and understanding what is going on.
Nothing is in place at the moment; anything would be an improvement.
A limited but fundamental shortlist:
Changing the wrong thing: "developer problem".
Focus on the wrong thing: "developer interests".
Putting Lean and Systems Engineering Together.
👁❗ A cultural change is required to solve these kind of gaps.
The triangle: Operations Technology, Business Algorithms
Having technology and algorithms, ⬅ 💣 forgetting how to run them operationally, well secured.
Defining algorithms to run operationally, ⬅ 💣 failing to have selected appropriate generic technology.
Getting technology for operations, ⬅ 💣 forgetting the goal of algorithms in business applications.
R-2 Understanding ICT service gaps: getting them solved
R-2.1 Seeing ICT Service Gap types
Applications are business organisational artifacts served by technology.
Business rules, business logic, are set by the organisation.
Security architecture (technical)
Operational risk (functional)
Privacy - impact
Process - impact
Service gaps are in each of those four areas.
Using solutions to solve the service gaps.
⚙ R-2.1.1 Deducing the reason for ICT service gaps
Design of the information flow, the assembly line
Managing the product process flow is not only blindly following instructions.
The value stream has mandatory requirements that must be fulfilled and shown for the products in the portfolio.
During design and validation of the product they should get materialized.
As long as the product is relevant, that is, customers are using it or are able to refer to it, the information of the portfolio product for a dedicated version should at least be retrievable.
To get covered by information knowledge by a portfolio:
Security architecture (technical)
Operational risk (functional)
Privacy - impact
Process - impact
In a figure:
See right side.
Artificial Intelligence obligations
AI products
An AI product is a software or hardware solution that integrates artificial intelligence technologies to automate tasks, support decision-making, and enhance functionalities. Using AI (machine learning or deep learning algorithms), an AI product involves solving specific problems, optimising processes, or providing intelligent insights based on using data to improve the process aspiring to achieve human level ability.
It is an interesting idea to see AI as a product to deliver to the ones doing the real information flow processing.
A product like the well-known ANPR (automatic number-plate recognition) software can be delivered as a tool to others, similar to a drilling, milling or lacquering unit, for use in their information flows.
There are several concepts:
The Core mission information flow: the assembly line.
🕳 Hampered by technical hypes and not in control.
Platform usage for building up information flows: part at assembly units.
🕳 Hampered by misunderstanding in mission goals.
Models by AI creating a new type of tools.
🕳 In the hype not been seen as just another tool.
Until recently, just simple information transformations were the standard in administrative cyber information processing.
There is a lot of AI resistance, caused by too many mistakes and misunderstandings and by the missing safe state and correction options for mistakes.
The four challenges that need to get solved are structurally related to a portfolio.
data driven BI&A
The SIAR model is the highest abstraction of processes in many dimensions.
With four stages in four quadrants the holistic overview is placed in the middle.
In the highest abstraction the middle (center) is symbolised by an eye.
An intermediate of the SIAR abstraction:
A flow left to right, a clockwise cycle: pull at the bottom from right to left, push for the flow at the top.
At each of the four internal pillars: operational (red), office (administration, green), optimization (business architecture).
Four quadrants result in a square: the Operational plane, eight information stores consolidating into a circular one.
The consolidated circular store: the Analytical plane consolidating into a central one, eight plus one (nine).
A figure:
See right side
S South: Situation, Steer
I West: Input, Ideas
A North: Actions, Analyse
R East: Result, Request
Process flow value stream.
An early SIAR figure for process flow.
It is full with colours, blue is for the operational process flow, green for the assembly manufacturing and yellow for the control (pull).
The process of engineering an enterprise operational system.
SIAR is an alternative, another materialisation of two dual components, to the well-known PDCA, DMAIC and OODA cycles.
See right side.
⚖ 1 Identify customer value.
📚 2 Map the value stream.
⚒ 3 Design logical flow.
⚙ 4 Establish pull request (IV - III).
⚙ 4 Implement push delivery (I - II).
🎭 5 Seek perfection.
⚙ R-2.1.2 The Administrative Cyber System Life Cycle
Rationale
The SDLC, system development life cycle, enables:
processes by missions, evaluation: operational risk
has safety requirements known by cyber security
has privacy impact requirements for input and results
has safety requirements for impact at results
When this activity is not well understood or not well controlled, it easily gets into a garbage mess.
Implications
There are conflicts in accountabilities and responsibilities:
Interests and accountabilities are at the organization.
Legal mandatory obligation for each case: know the used tool versions in the assembly.
Operational execution by technology service.
Technology service has no self-driven interest in fulfilling the organisation's legal obligations.
Historically grown ideas, following just technical hypes, are blocking factors to doing this well.
⚙ R-2.1.3 Safety at Administrative Cyber Systems
Rationale
Safety, cyber security, is technology enabling:
Risk driven for processes by missions
Indispensable safety processes by missions
has privacy impact requirements for input and results
has safety requirements known by process impact at results
When this activity is not well understood or not well controlled, it easily gets into a garbage mess.
Implications
There are conflicts in accountabilities and responsibilities:
Interests and accountabilities are at the organization.
Many legal mandatory obligations exist.
Operational execution by technology service.
Technology service has no self-driven interest in fulfilling the organisation's legal obligations.
Historically grown ideas, following just technical hypes, are blocking factors to doing this well.
⚙ R-2.1.4 Administrative Cyber Systems, the Operational Plane
artifacts in process patterns
The visible materialized data, information representations:
Extract and load materials into a Landing area
Validate the material at landing placing them into Staging
Prepare Staging for transformation processing at Semantic
Deliver transformations results into Databank
Between those data materialisations there are processing activities.
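A minimal sketch of this materialisation pattern, assuming placeholder processing steps: each function is one of the processing activities between two stored representations.

```python
# Sketch of the materialisation pattern: each stage is a stored,
# visible representation; the functions between them are the
# processing activities. Function bodies are placeholder assumptions.

def extract_load(source) -> list:        # -> Landing area
    return list(source)

def validate(landing: list) -> list:     # Landing -> Staging
    return [row for row in landing if row is not None]

def prepare(staging: list) -> list:      # Staging -> Semantic
    return [str(row).strip() for row in staging]

def transform(semantic: list) -> list:   # Semantic -> Databank
    return sorted(set(semantic))

landing  = extract_load(["b", None, "a ", "b"])
staging  = validate(landing)
semantic = prepare(staging)
databank = transform(semantic)
print(databank)   # ['a', 'b']
```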
Information flow, using closed loops
Monitoring what is going on, closed loops on operational flows should be in place.
When there is highly vital product data, an additional flow evaluating new options before changing process flows is needed.
In a figure:
Process change control by four lines
Changing process flows is done by an orchestrated change of four dependent process activities in the standard pattern.
In a figure:
Rationale
Information processing using static representations in a flow, with process changes as other artifacts, is a mindset.
Understanding process flows by missions
Understanding how to change process flows in missions
helps in knowing at what point in flows there are safety or other risks
helps in knowing where historical information in the operational plane is required.
Historical data is important but has not been given priority, due to technical limitations.
Not giving this the needed attention is a high risk for the organisation.
Implications
There are no conflicts in accountabilities and responsibilities.
Interests and accountabilities are at the organization.
Technology service can help in realizing a glossary dictionary.
Historically grown ideas, following just technical hypes, are blocking factors to doing this well.
⚙ R-2.1.5 Master Data, Communication: Administrative Cyber Systems
Rationale
Master data, communication is enabling the organisation:
Understanding processes by missions
Understanding the drivers for change in missions
helping in empowering the workforce
Not giving this the needed attention is a blocking factor for all activities.
Implications
There are no conflicts in accountabilities and responsibilities.
Interests and accountabilities are at the organization.
Technology service can help in realizing a glossary dictionary.
Historically grown ideas, following just technical hypes, are blocking factors to doing this well.
R-2.2 Solving: The ICT-SDLC challenge
Applications are business organisational artifacts served by technology.
Business rules, business logic, are set by the organisation.
Methodologies for technology to follow are:
ALC-V1: Dictated instructions on what to do, what to achieve.
There are layers for a functional, organisational and technical solution.
Multiple business units (verticals, tenants) with each multiple products coexist in a cooperative environment. ❶ Business applications served by a multiple tenants philosophy:
Value streams, Business logic: code, instructions, in flows and life cycles (SDLC).
Materials, Business data: information - metadata at several stages.
Closed loops (lean): Dashboards, reports, analytics on operations with life cycles.
❷ Tools, middleware, platforms, supporting "Business application" as a service.
⚠ Interactions by several tools are very likely.
Every tool, middleware, platform is subject to multiple lifecycles (SDLC):
Value streams supporting business: dedicated segregated configurations.
Tools, middleware, platforms being out of the box software.
Infrastructure: Configuration and settings for the tool interacting with the operating system and hardware (datacentre), the cloud.
In a figure, the layered pyramidal structure,
See right side:
⚠ Everything is "data": software, tools, materials, business data, business logic, operating system, network: all of it is technical data.
Only the materials, the business data, is information in value streams. 💣 "Everything is data" is too volatile, uncertain, complex, ambiguous.
⚖ R-2.2.2 LCM Basics for Business Applications
ALC Application Life Cycle principles
❶ Doing release management is about promoting artifacts or more complex objects (entities).
Knowing what it is technically and logically about, is a prerequisite.
From an old document, still valid, highly abstracted questions.
In order to successfully move an entity from one environment to another, a number of key questions must first be addressed:
Exactly what needs to be promoted?
Are there configuration dependencies which need to be resolved prior to promotion?
How can concurrent updates be prevented?
What impact will this promotion have on the target environment?
Are there developmental differences which need to be resolved prior to the promotion?
How can regression be prevented while migration is in progress?
How is it possible to determine if the promotion is successful?
👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
ALC-v2 Software Development lines
❷ In the classic application life cycle management (ALC-v2) the focus is on programs, software that can be run (executed).
All artifacts in scope are stored archived in a "Software Library".
The promotion is as follows:
D goes into T
T goes into A
P archives into Z (previous production versions)
A goes into P
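A small sketch of this promotion order. Note the execution order: production is archived into Z before acceptance is promoted, so nothing gets overwritten; the version labels are illustrative.

```python
# Sketch of the ALC-v2 promotion order. Executed safely: production is
# archived into Z first, then each environment receives its upstream
# version. Environment names follow the list above; versions are examples.

environments = {"D": "v5", "T": "v4", "A": "v3", "P": "v2", "Z": "v1"}

def promote(envs: dict) -> dict:
    envs = dict(envs)
    envs["Z"] = envs["P"]   # P archives into Z (previous production version)
    envs["P"] = envs["A"]   # A goes into P
    envs["A"] = envs["T"]   # T goes into A
    envs["T"] = envs["D"]   # D goes into T
    return envs

print(promote(environments))
# {'D': 'v5', 'T': 'v5', 'A': 'v4', 'P': 'v3', 'Z': 'v2'}
```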
Additional non functional requirements:
Data, information, is not in scope. There are walls for safety, in separations.
Virtual machines or other measures preventing unwanted interactions on technical resources.
❸ What is defined is:
A software library. These are storage locations where the source, of any type, is located.
The master, main development, line. Used for normal maintenance.
Two types of artifact components:
Artifact components that have the attribute of virtual reuse of another artifact.
Artifact components that are uniquely present in a dedicated environment.
Artifact components with content unique to a dedicated environment should get minimized.
These are blocking components and risks in release management, life cycles. ❹ With parallel development it gets more complicated. What is changed in one line must possibly be backpropagated into others.
The devil is in the details of possible backpropagation.
An emergency fix development.
One or more parallel lines. The goals for these are:
a complete redesign, refactoring of an existing product
new (sub)product lines (green fields) planned to get integrated in an existing master.
An explanation attempt,
See left side:
(video - controls)
Emergency fix: quick
Parallels: awaiting
Master: normal time
👉🏾 This abstraction is technology agnostic, valid for a lot of development types. ❺ The decision for updates, whether they are to be propagated to other lines or are obsolete, can't be an automated decision.
Only the involved developers can know and should decide what to do.
The decision on propagation, into which line and what to merge, can't be an automated decision.
Lieutenants are coordinating the work of developers.
Dictators are coordinating the work of lieutenants.
Customers, the real emperors, are deciding on the work by dictators.
👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
ALC-v2 Software Development lines.
❻ The distribution to involved machines other than the one being the source.
An additional acceptance machine and multiple production machines are just an example.
In a video:
See left side:
(video - controls)
👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
❼ Artifacts components that have the attribute of virtual reuse of another artifact.
Advantages, it removes all complexity:
for the inventory what has changed
by required backpropagations
Disadvantages :
Not possible with all technologies
Usual challenge: it is not well understood.
An explanation of the difference between what is seen and what is really physically present.
Three possible scenarios
A video,
See left side:
(video - controls)
👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
⚖ R-2.2.3 Advanced LCM topics for Business Applications
❽ Exporting metadata from one dictionary (database) of any type to another is the solution for release management.
Adding related documentation to the portfolio is the finishing touch.
Dedicated tools are needed to export and import the artefact.
In a figure,
See right side:
👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
ALC-v3 value stream flows.
❾ The ALC-v3 (see R-2.1.4) splits the flow into four steps.
Each of the four lines follows the ALC-v2 constructs.
What is added is a dependency in time constraints:
Creating Landing information (0,1) finished before Staging.
Creating Staging information (2,3) finished before Semantic.
Creating Semantic information (4,5,6) finished before Databank.
Delivering information artifacts (9) is after creating Databank.
The disposal goes in a reversed order.
👉🏾 This abstraction is technology agnostic, valid for a lot of development types.
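A sketch of these time constraints, assuming the step numbers from the list: each line finishes before the next starts, and disposal runs in the reversed order.

```python
# Sketch of the ALC-v3 time constraints: each line must finish before
# the next may start; delivery comes after Databank, and disposal runs
# in the reversed order. Step numbers follow the text (0,1 / 2,3 / 4,5,6 / 9).

build_order = ["Landing (0,1)", "Staging (2,3)",
               "Semantic (4,5,6)", "Databank", "Deliver (9)"]

def build(lines: list) -> None:
    for line in lines:                 # each finishes before the next starts
        print(f"create  {line}")

def dispose(lines: list) -> None:
    for line in reversed(lines):       # disposal in the reversed order
        print(f"dispose {line}")

build(build_order)
dispose(build_order)
```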
Release management, deliveries, connections in flows.
❿ Next level: using an AI building block, exchanging a component with other flows. 🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
R-2.3 Solving: Safety perspectives
Components (tools) purchased, middleware:
DBMS: database management systems
File transfer, information exchange tools
ERP: enterprise resource planning systems
ELT: data processing tools
BI&A: Analytics & reporting tools
..
Intention: enabling a safe environment, holistically.
⚙ R-2.3.1 Safety perspectives within the organisation
Who is securing information by roles relations (I)?
❶ The layers by technical provisions:
Business applications served by a multiple tenants philosophy.
Tools, middleware, platforms, supporting "Business application" as a service.
Infrastructure, Datacentre, the cloud, a service enabling tools, middleware, platforms.
All three needed to be covered by safety, administrative cyber security controls.
The most well-known activity is assigning natural persons, staff, to roles in the security administration.
This is just a partial view of the landscape of all artifacts, objects to get secured.
⚠ Only being able to see the assignment of security roles is a fundamental threat for the organisation. 💣 Safety, cyber security, manages volatile, uncertain, complex, ambiguous situations.
Who is securing information by roles relations (II)?
❷ The easy way is using the HR department for assigning roles.
The result of this is a lot of issues at platforms and infrastructure that are propagating into business applications.
Devils triangle security
See figure right side
Role based Access Control: Keys, accounts, groups, security identifiers I
❸ Even more confusing in this triangle of responsibilities: there are at least four technical subjects to manage.
Identities, not only the personal ones but also non personal technical.
Group identities for decoupling access right constructs from personal dependencies
Authorisations for business resources and processes.
Authorisations for technical resources and processes.
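A minimal sketch of these four technical subjects, with hypothetical identities and groups: authorisations attach to group identities, decoupling access rights from personal dependencies.

```python
# Sketch of the four technical subjects named above: identities
# (personal and non-personal), group identities for decoupling, and
# authorisations for business and for technical resources.
# All names, groups and permission strings are hypothetical examples.

identities = {
    "jdoe":      {"type": "personal"},
    "svc_batch": {"type": "non-personal"},   # technical identity
}

# Group identities decouple access rights from personal dependencies.
groups = {"finance_read": {"jdoe"}, "batch_ops": {"svc_batch"}}

# Authorisations attach to groups, never directly to persons.
authorisations = {
    "finance_read": {"business:ledger:read"},          # business resource
    "batch_ops":    {"technical:scheduler:execute"},   # technical resource
}

def permissions(identity: str) -> set:
    return {p for g, members in groups.items() if identity in members
              for p in authorisations.get(g, set())}

print(permissions("jdoe"))       # {'business:ledger:read'}
print(permissions("svc_batch"))  # {'technical:scheduler:execute'}
```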
Role based Access Control: Keys, accounts, groups, security identifiers II
❹ The complexity in dependencies by logic and the related technical implementations is overwhelming.
Technical alerts are complicating the landscape further.
Another devils triangle security
See figure right side
Role based Access Control: Keys, accounts, groups, security identifiers III
❺ The functional and technical challenges are extending into organisational ones.
Who is responsible and accountable for a business application should be made clear.
Who is responsible and accountable for a tool should be made clear.
💣 The usage of a key / account should be traceable to a natural person.
Shared usage of accounts by natural persons is to be avoided.
Aside from keys and groups, other attributes could be: date / time, machine hardware identification, geo location, used network connections, skill level, authorization level.
A natural person can have multiple keys / accounts assigned.
For high privileged rights, segregation by different accounts is mandatory.
Safety on materialised information (at rest)
❻ What looks simple is in reality terribly complicated.
All information in one of the D,T,A,P environments should be in the same security context segment for operational usage.
When there is an analytical plane in place, controlled, managed gateways should be in place.
The rules for business information are not the same for business rules (code logic).
Another devils triangle security
⚠ These are real business information assets to manage.
See figure right side
Operational execution, operators
❼ The operators (production), acceptance testing, system & program testing have another type of needed access.
Privileged accounts are to be implemented for safety on logic, code software, and information.
Another devils triangle security
⚠ These are real business information assets to manage.
See figure left side
⚙ R-2.3.2 Safety perspectives combined to release management
ISTQB - Test quality an ICT specialist
❽ The capabilities for software testing are standardized with vision.
Safety is an indispensable part of testing.
Certifications and an international organization.
ISTQB® was established in 1998 and its Certified Tester scheme has grown to be the leading software testing certification scheme worldwide.
The ISTQB Agile Test Leadership at Scale (syllabus) connects to lean.
Source: LEI (Lean Enterprise Institute).
❗ There are two types of values streams!
A value stream is a concept that originates in lean management.
Value streams are groups or collections of working steps, including the people and systems that they operate, as well as the information and the materials used in the working steps.
In value-driven organizations, quality and testing roles help to optimize the whole value stream, not just testing.
There are two typical types of value streams: operational and development. 👁 Operational value streams are all the steps and people required to bring a product from order to delivery (LEI, no date). 👁 Development value streams take a product from concept to market launch (LEI, no date).
Key aspects of value streams are to understand the lean concepts of flow and of waste (non-value-adding activities).
Safety practice - Testing validating with quality
❾ There are guidelines for separations in activity lines (ISO/IEC 27002):
12.1.4 Separation of development, testing and operational environments
Development, testing, and operational environments should be separated to reduce the risks of unauthorized access or changes to the operational environment.
⚙ R-2.3.3 Safety perspectives privileged accounts
PIM privileged identity, PAM privileged access
❿ Isolating privileged usage with NPAs (non-personal accounts).
Avoiding:
Ad-hoc interactive usage as the solution for operations
Direct access by natural persons, impossible or difficult to monitor
Operational dependency on accounts of natural persons
PAM pop, principles of operation (Microsoft)
Privileged Access Management keeps administrative access separate from day-to-day user accounts using a separate forest.
The PAM approach provided by MIM PAM is not recommended for new deployments in Internet-connected environments.
MIM PAM is intended to be used in a custom architecture for isolated AD environments where Internet access is not available, where this configuration is required by regulation, or in high impact isolated environments like offline research laboratories and disconnected operational technology or supervisory control and data acquisition environments.
👁 Entra Privileged Identity Management
(PIM) is a service in Microsoft Entra ID that enables you to manage, control, and monitor access to important resources in your organization.
These resources include resources in Microsoft Entra ID, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune.
Organizations want to minimize the number of people who have access to secure information or resources, because that reduces the chance of
a malicious actor getting access
an authorized user inadvertently impacting a sensitive resource
👁 However, users still need to carry out privileged operations in Microsoft Entra ID, Azure, Microsoft 365, or SaaS apps.
Organizations can give users just-in-time privileged access to Azure and Microsoft Entra resources and can oversee what those users are doing with their privileged access.
Not good: Using Privileged Identity Management requires licenses.
Privileged Identity Management provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions on resources that you care about.
Here are some of the key features of Privileged Identity Management:
Provide just-in-time privileged access to Microsoft Entra ID and Azure resources
Assign time-bound access to resources using start and end dates
Require approval to activate privileged roles
Enforce multifactor authentication to activate any role
Use justification to understand why users activate
Get notifications when privileged roles are activated
Conduct access reviews to ensure users still need roles
Download audit history for internal or external audit
Prevents removal of the last active Global Administrator and Privileged Role Administrator role assignments
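A technology-agnostic sketch of the just-in-time idea behind these features (time-bound access, approval, MFA, justification, audit); it models the list generically and is not the Entra API.

```python
# Technology-agnostic sketch of just-in-time privileged access:
# time-bound activation, required approval and MFA, a justification,
# and an audit trail. This models the listed features generically;
# it is not the Microsoft Entra API.
from datetime import datetime, timedelta, timezone

audit_log = []

def activate_role(user, role, justification, approved, mfa_passed, hours=1):
    if not (approved and mfa_passed):
        raise PermissionError("approval and multifactor authentication required")
    now = datetime.now(timezone.utc)
    grant = {"user": user, "role": role, "justification": justification,
             "from": now, "until": now + timedelta(hours=hours)}  # time-bound
    audit_log.append(grant)   # history, downloadable for audits
    return grant

grant = activate_role("jdoe", "Global Reader",
                      "incident INC-1234 triage", approved=True,
                      mfa_passed=True, hours=2)
print(grant["until"] > grant["from"])  # True: access expires automatically
```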
👉🏾 Abstractions of the mentioned technologies make the goal technology agnostic. 🚧 Retrospectives and corrective actions are needed. 👐 Goals, criteria: solving the real bottlenecks hampering the organisation.
Compliancy questions are applicable everywhere internal and external for an organisation.
Although this is the technical pillar, representative roles matching the ones in the organisational pillar are needed.
Support for the organisational:
CSO Chief Security Officer
CDO Chief Data Officer
CFO Chief Financial Officer
COO Chief Operations Officer
Similarity between using the SIAR model holistically and at the technical pillar is intended.
⚙ R-2.4.1 Data / Information Governance: Safety
Generic process flow model
Every process has some input and some output / results to deliver.
For processing: the components at rest and the information in transit go through several steps.
There is an important difference at D (Develop), being the only line developing new logic. The Test, Acceptance and Production (operations) lines are very similar.
In a figure:
A shadow Production or Acceptance can act as fallback for the Production line when not sharing critical components.
Hardware as physical component is very often shared by having it in a datacentre.
✅ The classic DR, Disaster Recovery, focus on recovery of losing physicals.
Aside from losing physical access, losing the logical information is a risk. It could happen unintended, by accident, but also intended, by a bad actor.
Ransomware is an incarnation of intended logical destruction.
✅ The classic BR, Backup Recovery, focus on recovery restoring logical content.
Availability mitigations
Having those recovery strategies for physical and logical components: they are costly and you never really want to use them. 😱 I will never understand why the responsible, accountable ones are doing cost savings by ignoring recovery strategies.
Buying some expensive physical components for physical hot standby and then claiming that logical content recovery is not necessary anymore, is not understanding risks.
DR exercises for physical recovery that are only successful after several attempts are a fail. ⚖ 😉 By understanding the business impact, acceptable choices are possible. Why would you need development for a limited time when doing a technical migration?
Information flow, using closed loops
Monitoring what is going on, closed loops on operational flows should be in place.
When there is highly vital product data, an additional flow evaluating new options before changing process flows is needed.
In a figure:
⚙ R-2.4.2 Data / Information Governance: Flow
Confidentiality, Integrity, Availability
A figure,
See right side:
⚙ R-2.4.3 Data / Information Governance: Explainablity
solving missing: backstop, parallel alternative selection
Initiatives Result & More.
SMART, the buzzword, getting another name by reordering and using other words.
 
SIAR not STAR, PDCA
When seeing and recognizing an issue, being involved and committed to the process, the following questions arise:
Why?
Possible improvements?
Who can help?
What can I do?
When to do it?
Situation going for Initiatives that are by Actions getting into Results: SIAR
"T" (Task) replaced by I. Tasks are dictated. Initiatives are using the experiences from what is going on.
The PDCA cycle is the same, shifted: IARS. Do is Actions, and Act (decide what is next) is analysing the Situation.
The external input, external service provision (left) and external output, external delivery support (right) are not in a logical time order.
To break this illogical order, somewhere in the continuous cycle a start must be made.
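A small sketch of the SIAR cycle and its shifted correspondence with PDCA, breaking in at Initiatives as the start; the mapping follows the text above.

```python
# Sketch of the SIAR cycle and its shifted correspondence with PDCA:
# Do maps to Actions; Act (decide what is next) to analysing the Situation.
siar = ["Situation", "Initiatives", "Actions", "Results"]
pdca_for = {"Situation": "Act", "Initiatives": "Plan",
            "Actions": "Do", "Results": "Check"}

start = siar.index("Initiatives")          # break the cycle somewhere to start
for i in range(len(siar)):
    stage = siar[(start + i) % len(siar)]
    print(f"{stage:11} ~ PDCA {pdca_for[stage]}")
```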
Processes - Continuity, Availability - DR BCM.
Business continuity is not only about having a well secured physical and logical environment but also: what to do when things are going terribly wrong.
Losing physical or logical access can stop all processes for an organisation.
R-2.5 Solving: Communication Cooperation
The simple question: "Whose Job Is It, Anyway?"
There was an important job to be done and Everybody was sure that Somebody would do it.
Anybody could have done it, but Nobody did it. Somebody got angry about that, because it was Everybody´s job.
Everybody thought Anybody could do it, but Nobody realized that Everybody wouldn´t do it.
It ended up that Everybody blamed Somebody when Nobody did what Anybody could have.
What struck me was that in the discussions, terms often came back that could mean the same thing.
Sometimes they talked about interventions, sometimes about subsidies and sometimes about regulations or openings.
The coordination took the project a lot of time.
This raised the idea to recognize and define these concepts and to set up a conceptual information model for the agricultural domain.
The starting point? The European Common Agricultural Policy (CAP), the associated EU regulations and the Dutch reflection thereof: the National Strategic Plan (NSP), supplemented with NL legislation.
Many hundreds of pages of 'pure reading pleasure'!
But in those many hundreds of pages was also the challenge: how could we get all those years of knowledge and expertise from the documents and heads of the experts onto 'paper', without sedating them, kidnapping them and interrogating them 24/7 in a shed somewhere?
The solution for miracles of our time: artificial intelligence.
What was the value of a VE 'virtual expert' for the entire modeling process?
The VE played an important role in the modeling process and the process consisted of the following steps:
Drawing up a model of concepts for the domain
Organizing concepts and recognizing data areas
Defining the concepts, using the right sources
Drawing up concrete examples of these concepts
Drawing up the information model based on the example sentences
Validating the information model using example sentences.
How to set up the 'virtual expert'? It's actually very simple (and not rocket science at all)!
Below is a step-by-step plan with simple 'prompts' with which you can build your own 'virtual expert'.
By the way, OpenAI's ChatGPT was used, but perhaps better/other versions are available.
Create an account at openai.com
Go to the option "my GPTs"
Please note: you must have a paid account to build a GPT yourself.
Select the option: "create a GPT"
Select the option: "configure"
Give the GPT a name such as "Subsidy Expert"
Give the GPT a description: "This expert knows all the types of subsidies that exist in the Netherlands."
Give the GPT a series of instructions in the instruction field (experiment!):
You are an expert in the field of . You communicate briefly and concisely and use informal language and preferably no bullet lists (unless I ask for it).
You always search first in trained documentation and only then in other knowledge sources.
You set up non-circular definitions and do not use terms that are synonymous with the concept to be described. etc...
Then upload the desired files - what is the context that the GPT needs to know?
Note: only provide open data to the GPT with this!
Tweak the model, adjust it as you wish... good luck!
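For the same 'virtual expert' without the GPT builder UI, a sketch using the OpenAI Python SDK: the instruction field becomes the system message. Assumptions: a paid API key set in the environment, the model name is just an example, and the instruction text is abbreviated.

```python
# The 'virtual expert' set up via the OpenAI Python SDK instead of the
# GPT builder UI. Sketch only: the model name is an example and the
# OPENAI_API_KEY environment variable must be set (paid account).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

instructions = (
    "You are an expert in the field of subsidies. You communicate briefly "
    "and concisely and use informal language. You set up non-circular "
    "definitions and do not use synonyms of the concept to be described."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Define the concept 'intervention'."},
    ],
)
print(response.choices[0].message.content)
```

As with the GPT builder, only provide open data as context.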
Connecting a thesaurus to naming standards
A thesaurus is a logical theoretical construct, naming standards, usage standards are realisations for coding and reporting.
Data, information has stages in a cycle (six):
Plan: what kind of information is needed
Design: preparation with scale of measurements
Realisation: create a thesaurus, a master data management system
Manage: store, archive or destroy the content for the thesaurus information
Usage: simple usage of specified elements in the thesaurus
Insight: Getting wisdom using interactions of multiple elements in the thesaurus
In practice the challenge is to build a thesaurus from the specialists doing their work. 💡👁❗ Preparing for a data literacy structure: "Data driven work".
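A minimal sketch of the six stages as an ordered cycle, for example as a status attribute on a thesaurus record; the wrap-around shows it is a cycle, not a line.

```python
# The six stages of the data/information cycle above as an ordered enum,
# a minimal sketch usable as a status attribute in a thesaurus record.
from enum import Enum

class DataStage(Enum):
    PLAN = 1         # what kind of information is needed
    DESIGN = 2       # preparation with scale of measurements
    REALISATION = 3  # create the thesaurus / master data system
    MANAGE = 4       # store, archive or destroy the content
    USAGE = 5        # simple usage of specified elements
    INSIGHT = 6      # wisdom from interactions of multiple elements

def next_stage(stage: DataStage) -> DataStage:
    return DataStage((stage.value % len(DataStage)) + 1)  # wraps around

print(next_stage(DataStage.INSIGHT))  # DataStage.PLAN: the cycle restarts
```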
The evolution of knowledge to insight
❶ My first site was a description of what has happened. ❷ The second tried to improve on that, getting prescriptive on how it should work. ❸ This third is going predictive: from what has happened to what may possibly happen.
Old useful information should get a place in the new structure. It will give some waste that is not really waste.
The driver behind the last change is the idea of a framework and related tools "Jabes".
The problem with that idea is that there is no good fit in the existing situation of ICT. ❹ A needed correlated change:
What are the ICT gaps?
How can those ICT gaps get solved?
Why is the ICT change important for the future?
R-2.6 Maturity 4: ICT service solutions for gaps
From the three ICT, ITC interrelated scopes:
✅ I - processes & information
✅ T - Tools, Infrastructure
❌ C - Organization optimization
Only having the focus on IT4IT: getting a mature Life Cycle Management (LCM) requires understanding and acknowledgment of the layered structure.
Each layer has its own dedicated characteristics.
⚒ R-2.6.1 Leaving the comfort zones
Servicing technology - release change, safety
Just understanding some better technology concepts does not really help in changing a culture that is blocking improvements.
😲 For release management there have been legal obligations for a long time.
I have never seen those implemented.
The biggest impediment is a mindset switch:
Knowing the used version of algorithms, business rules.
Knowing the used versions of involved platforms.
Knowing the used versions of involved infrastructure.
😲 A safe environment (cyber security) has had legal obligations for a long time.
I have never seen those implemented.
The biggest impediment is a mindset switch: it is an organisational accountability and responsibility, not something to be outsourced as just technology.
It starts at the organisation with risk management.
Servicing technology - information flow, master data
Just understanding some better information flows does not really help in changing a culture that is blocking improvements.
😲 For information flows there should be a valuable theory.
When structuring was appreciated, it was done by operations and also by developers.
The best fit these days is only: reverse engineering existing flows, process mining.
I have seen things implemented, however all got removed. The reasons: the cost argument and conflict avoidance.
😲 Master data, data dictionaries, are defining context for information, information flows.
I have seen it implemented once; it got dismantled with the cost-saving argument.
The biggest impediment is a mindset switch: it is an organisational accountability and responsibility; this is not technology but of high value for the organisation.
It starts at the organisation with a shared glossary shared vocabulary understanding each other.
⚒ R-2.6.2 Detailed information for service solutions
Servicing technology
The first idea with Jabes was: it would be a technology solution.
After analysing, see "C-Server", that proved to be wrong.
However, technology has impediments for a structural change using lean.
There are solutions to remove those impediments, but the willingness to apply them faces cultural impediments.
In the practical technology area I have many old pages, some are theoretical.
They should be subpages in one of the two technology areas (C-Serve r_serve).
technology theoretical - Life cycle
List of detailed old pages (sdlc):
Layers 👓 VMAP, ISTQB, Test management, part of ALC
Wondering why nothing has changed for many years, repetition of similar developments and events?
The technology buzz: lightning strikes (Bill Inmon sep 2024)
Recently in the morning I was talking with a venture capitalist. We were discussing technology, the marketplace, trends, and what is current – AI, ChatGPT, generative AI, and current trends. I was talking to the venture capitalist about technology that produced business value for the corporation.
I was talking about a corporation making more money, becoming more profitable, and having more customers.
I had always assumed that that is what corporations wanted to do.
The venture capitalist interrupted me and said – that isn’t what people are interested in.
People today are interested in buying into the technology of AI for the sake of having AI.
People are really into technology that is cool. Organizations are heavily into FOMO – fear of missing out.
Corporations don't want to be considered to be behind the times so they have to bring in AI, in one form or the other.
Making money and making more revenue is just not what sells today.
😱 Did I hear that right? Corporations are buying into the cool factor without considering the business value? Is that really true?
Then that same afternoon I was talking to a consulting company.
And lo and behold I had the same conversation.
The head of the consulting firm told me that technology was selling for the sake of technology, not for the enhancement of business value.
He said that corporations just weren’t interested in enhancing business value.
What corporations were interested in was appearing to be a modern corporation to the outside world.
Corporations just weren’t interested in more profitability, more revenue and more customers.
😱 To be honest, I could not believe what I was hearing.
➡ I assume everybody with long experience at organisations recognizes this.
It is not very often that a clear statement like this is made.
Hasn't the world learned anything from the many silver bullets that IT has brought to the business community over the years?
Hasn't the world learned to – first and foremost – ask for business value?
How many silver bullets have been presented as "the solution", only to disappear in a year or two?
If a technology does not bring business value then it has no staying power and will be swept away with the next tide.
The definition of insanity is to do the same thing repeatedly and to expect a different outcome.
The corporate community is having decisions being made by either very naïve people or insane people. (Does it matter which?)
Corporations keep buying into the silver bullet expecting their technology and IT problems to disappear.
➡ Recognizing what is going on is the first step in awareness.
I am hoping that at some moment there is a turning point, leaving this weird cycle of just following others.
🤔 If technology – any technology – does not fulfill business value then that technology is not long for the earth.
And corporate management – once again – will have wasted a lot of money and opportunity.
Once again corporate management will have spent huge resources on the silver bullet.
🤔 In the Gartner hype curve the vendor produces a tremendous amount of hype in order to establish a product.
But when the corporation enters the trough of disillusionment, it is only genuine business value that pulls the technology back into a state of positive acceptance.
If there is no business value there, the technology withers and dies.
In a word – if technology is being bought and sold on the basis of cool, then that technology is in danger of being just another failed silver bullet.
In order to have a long term, sustained presence and value in the marketplace, the technology MUST produce viable, measurable, prodigious business value.
That is the most important and the immutable role of new technology if the technology is to survive.
Servicing technology, Improvements
Wondering why nothing has changed for many years, repetition of similar developments and events?
The culture disillusion:
Why today's leaders don't value TPS / lean (Kevin Kohls, Sep 2024): convincing leadership to adopt the cultural shift that comes with internal continuous improvement.
This is where the howls of contention start.
"Leadership should already know this,"
"The power of this tool should be obvious to leadership,"
"It's not my job to convince leaders to adopt this method,"” etc.
The list is long and filled with denial and finger-pointing.
🤔 In reality, people hired in 2000 are often in leadership positions 24 years later.
They have had minimal success with many of these methods since starting their jobs.
They have seen the 2008 recession, COVID-19, chip shortages, and profitability rise because demand is high.
They have seen the rise of robotic automation on the plant floor and in the office and the sudden rise of artificial intelligence.
These outside influences go beyond internal productivity. They have a greater chance of support and possible impact than lean, TOC, etc.
🤔 To this leadership group, outside influences not under their control have been the focus of their attention, not internal Continuous Improvement.
We can complain about the lack of leadership support, but we must accept that leadership will spend budget and headcount.
➡ We all see the struggles at organisations not knowing well how to change to achieve real improvement.
The initiatives for a profitable lean approach require seeing the C-roles as customers.
It is not very often that a clear statement like this is made.
🤔 The other realization is that we have little empathy for leadership challenges.
Although we have never been leaders, we quickly point out that they are the root cause of the problem. We have no responsibility to change that.
The reality is quite different.
First, we forget that leadership is OUR customer.
Like any customer who buys a product on Amazon, customers will buy things that adequately address their product or service, often based on the product's Five Star Rating.
We in Continuous Improvement don't do that.
We try to convince them to buy a product with a poor reputation, do it regardless of the costs, and tell them what they should see as valuable as customers.
➡ We all see the struggles at organisations not knowing well how to change to achieve real improvement.
The initiatives for a profitable lean approach require seeing the C-roles as customers.
It is not very often that a clear statement like this is made.
🤔 In addition, we have little empathy for their position.
They got the promotion and should be able to figure it out.
They are not a peer anymore. They are getting paid big bucks.
Let them figure it out by themselves.
We have explained the brilliance of our solution, and we can’t help it if they don’t see its rationale.
We fail to realize that commitment, which includes a commitment to a limited budget, will have to address their problems, not ours.
We must recognize that their decision to adopt a CI mindset will be more emotional than logical.
We may think they have access to vast amounts of money, but they only have a small budget, which they must see an ROI on if they want to continue to support this effort.
Finally, we think leaders should all be like someone else or run like another company.
They should have products like the iPhone and the mature kaizen mindset of Toyota, be driven to success regardless of the barriers, yet be sensitive to our daily needs: the wisdom of Gandhi, the passion of Steve Jobs, the fierceness of Patton, etc.
...
➡ We all see the struggles at organisations not knowing well how to change to achieve real improvement.
The initiatives for a profitable lean approach require seeing the C-roles as customers.
It is not very often that a clear statement like this is made. The lack of empathy to solve leadership problems is the biggest failure on the part of CI.
This is a fundamental mindset change in Lean.
We understand the need for ROI, so it's included in decision-making.
We emphasize a single goal, allowing clarity about priorities and delegation.
We attempt to solve their problems, not demand that they solve ours.
⚒ R-2.6.4 Getting ITC in control
Servicing technology - release change, safety
Just understanding some better technology concepts does not really help in changing a culture that is blocking improvements.
😉 When release management gets improved, do not expect it will ever be finished.
Continuous improvement is what matters.
😉 When safety (cyber security) gets improved by design, do not expect it will ever be finished.
Continuous improvement is what matters.
Servicing technology - information flow, master data
😉 When better information flows are getting understood & implemented, do not expect those will ever be finished.
Continuous improvement is what matters.
😉 When master data, data dictionaries for information context, information flows, get into place, do not expect it will ever be finished.
Continuous improvement is what matters.
R-3 ICT service adding value to missions of organisations
R-3.1 Avoiding ICT Service Gap types
Understanding what is going on, with all uncertainties and possible future scenarios, is an everlasting quest.
A pity when there are a lot of misunderstandings from not having a shared ontology, a shared vocabulary.
A system that supports the ICT ITC service and the change, shape, transformations is a gap.
Building up complexity by mindset:
Logical (genba-3): Understandable technology
Conceptual (genba-2): Basic Service provision
Contextual (genba-1): Continuous change by decisions
⚙ R-3.1.1 Knowing understanding by ontology
Component: Enterprise Ontology 101
Enterprise Engineering, Enterprise Ontology, is a good starting point for reviewing what data is about.
Enterprise Engineering the manifesto
There are two distinct perspectives on enterprises (as on all systems): function and construction. ...
The key reason for strategic failures is the lack of coherence and consistency among the various components of an enterprise. ...
It is the mission of the discipline of Enterprise Engineering to develop new, appropriate theories, models, methods and other artifacts for the analysis, design, implementation, and governance of enterprises by combining (relevant parts of) management and organization science, information systems science, and computer science.
Ontology is the philosophical study of being.
Abstract objects are closely related to fictional and intentional objects.
The ontological model of a system is comprehensive and concise, and extremely stable.
It is the duty of enterprise engineers to provide the means to the people in an enterprise to internalize its ontological model.
Separation of intention and content will create a new field - enterprise engineering - and make that intellectually manageable.
Administrative systems have been processing data - information - for a long period using computers. Information is about abstract objects.
Options for measuring processes were limited as computer resources were limited and expensive.
Options to manage abstract objects were limited. That is all changing.
Example differences visions vs missions: ISTQB
A nice example of the difference between vision and mission, and what to do for missions.
(International Software Testing Qualifications Board, who we are): ISTQB
has the vision:
Defining and maintaining a Body of Knowledge which allows testers to be certified based on best practices, connecting the international software testing community, and encouraging research.
There is a long list of missions:
We promote the value of software testing as a profession to individuals and organizations.
We help software testers to be more efficient and effective in their work, through the certification of competencies.
We continually advance the Testing Body of Knowledge by drawing on the best available industry practices and the most innovative research, and we make this knowledge freely available to all.
We nurture an open international community, committed to sharing knowledge, ideas, and innovations in software testing.
We foster relationships with academia, government, media, professional associations and other interested parties.
...
⚙ R-3.1.2 Start with what drives success
focus on what really drives success
Let's put the agile wars to rest and focus on what really drives success. (Wolfram Müller sep 2024)
Agile Methods are always the best and are defended by their people!
We've all heard the heated debates: SCRUM vs KANBAN, SAFe vs Less and XP was always better ...
... endless arguments about which method is the ultimate way to run an organization.
But let's be honest – if any one method was the silver bullet, wouldn't the debate be over by now?
The fact that the discussion still rages shows one thing: none of them are perfect. There must be something more.
Question: why does project management get all the attention and not the core products, the ones that carry the value?
In my life I went through both extremes, just projects & just product ... in the end it's all about scaling.
If you have just one core product (which is seldom the case) and you want to scale, then you have to specialize and build teams.
But they have to work together and deliver changes to the product on time ➡ that typically has project character.
If you build a new product then it has project character anyway.
There are companies who earn their money with projects e.g. special machinery or events or agencies.
Delivering and maintaining a product is the domain of production - delivery of smaller items with long waiting times.
In a bigger organization, you always have parts that show production character and some parts more projects ...
➡ often you have even separated value streams for each.
So I would say - it makes perfect sense to focus on the product (so that it solves a constraint of the customer).
In bigger organizations you need a production value stream ( ➡ run) and a project value stream ( ➡ change), but both cases need FLOW.
Question, remark: I am missing the closed-loop feedbacks informing what is going on and whether improvements are achieved, in an objective, measurable, understandable context for the defined, agreed flows.
The reply:
Yes that is often missing, maybe you read it in my comment, you have to have parts of the organization taking care of the product.
What we often see is that even in product oriented organizations these parts are disconnected from the customer and customer value stream.
They often have no idea about the constraint of the customer, let alone the constraint of the customer's customer.
So I'm fully with you, but you can foster the feedback loops also in project environments.
At "1and1.com" (now ionos) where i was responsible for the PMO we had special project streams for products they delivered new products (not just variations) within weeks (2-6).
That is just possible if you have a full flow project organizations e.g. the roll out of the DSL 16MBit (ok it's some time ago).
The project idea came up at the 28-11 and the first customer was online 07-01, no chance with a production organization.
That is just possible with projects and maybe you remember we had daily feedback to swallow from the customers :-).
The balance between change and run.
➡ If you look at an organization from outside you don't see the inner structure, you typically just see one value stream.
The focus should be there: what product delivers a real value (e.g. solves a constraint from the customer).
➡ There is typically a second stream, updates of the product or new products, this is more hidden.
If you try to optimize the team, then you get mega silos and the constraint in the value stream gets overloaded and the overall performance drops.
The focus should be on the constraint of the value stream (build & run) and often (marketing/sales).
⚙ R-3.1.3 Looking for a chart representing the enterprise area
charting a virtual floorplan
Cartography,
the art and science of graphically representing a geographical area, usually on a flat surface such as a map or chart.
It may involve the superimposition of political, cultural, or other nongeographical divisions onto the representation of a geographical area.
An abstraction of the shop floor for the virtual administrative world is not very common.
In the 6*6 representation of areas with ordered axes there is a start for a map.
From many dimensions, a projection onto one of 2 dimensions.
Limiting that to what is around somebody's position means seeing only 9 areas.
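A small sketch of that projection: on a 6*6 grid, the 3x3 neighbourhood around a position contains at most 9 areas, fewer at the map edge.

```python
# Sketch: on the 6*6 map with ordered axes, limiting the view to what is
# around somebody's position yields at most 9 areas (a 3x3 neighbourhood).
def visible_areas(x: int, y: int, size: int = 6) -> list:
    return [(i, j)
            for i in range(max(0, x - 1), min(size, x + 2))
            for j in range(max(0, y - 1), min(size, y + 2))]

print(len(visible_areas(3, 3)))  # 9 areas in the interior
print(len(visible_areas(0, 0)))  # 4 at a corner: the map edge cuts the view
```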
Lean at the shop floor genba2 genba3
For the way of working at the shop floor there are two perspectives:
Delivering and maintaining a product: domain of production
Product Updates or new products (more hidden): domain of change development
In a figure:
See right side.
⚙ R-3.1.4 Jabes Vision: Change ICT into a service culture ITC
Jabes for ITC ICT a vision for a cultural mindset.
The value stream has mandatory requirements that must be fulfilled and shown for the products in the portfolio.
It is a challenge how to create instructions to be followed in managing the product process flow.
During design and validation of the product the solutions for the challenges should get materialized.
As long as the product is relevant, that is, customers are using it or are able to refer to it, the information of the portfolio product for a dedicated version should at least be retrievable.
To get covered by information functional knowledge by a portfolio:
Systems deliveries architecture
Safety, security architecture
Systems product flows architecture
Information understanding master data
Going for this approach to deliver to customers, the service organisation should be ready with that mindset.
In a figure:
See right side.
Practice what you preach
To get covered in knowledge by a portfolio:
why:
Input: value stream of the product.
The input can be seen as an extract from some source.
Result: value stream of the product
The result can be seen as a load to some destination.
Process: One or more transformations in the stream to build the product.
Note: the revival of "ETL" at high level. The ELT is seen in transformations.
Consequences of functional accountabilities (what)
Impact to organisational accountabilities (who)
In a figure:
See right side.
Portfolio Process: ideate, initiate & technology validation in an infographic:
⚖ Goal: processes known & in control in all their aspects. ⚒ Craftsmanship, deliverables & receipts:
reviewing: quality of processes results delivered & expectations
◎ ⇄
reviewing: quantity of process load delivered & expectations
⇄ ⇆
Change and innovation alignment, following and/or initiating
⟳⇅
Evaluating Proposals innovation & changes for processes & technology
⟳⇅
Evaluating Proposals expected capacity load for processes & technology
ICT becoming a customer of ITC
The Jabes framework, related tooling, knowledge and way of working should get promoted.
The most logical first step: there must be someone with interests, but who? 💰 The one accountable for a product, responsible for managing what is in the portfolio.
A big organisation is able to have all knowledge and skills for information technology service in house.
A small company is dependent on the service provider.
Improvement idea: 💡👁❗ Standardise the Information Service so that the quality for product support and safety is well known and controlled.
All, whether big or small, should not suffer from unpredictable uncertainties. 📚 What is going on: the software development cycle is entangled with quality testing in a different time window, and both are entangled with safety.
Only the product accountability (CPO role) and safety accountability (CSO role) will always be within an organisation.
Improvement ideas:
A clear organisational role for the Chief Product Officer.
A clear organisational role for the Chief Safety Officer.
A new organisational structure for: "Closed Loops".
A data literacy structure: "Data driven decision making".
An approach for going holistic lean agile at all genba levels.
A new organisational structure for: Information Service.
In the next paragraphs each of them is the result of reasoning.
R-3.2 Continuous improvement of Systems
Managing information systems in a continuously changing world requires continuous adapting: being prepared to abandon obsolete solutions and to create new solutions in time.
Change is the only certainty.
A pity when there are a lot of misunderstandings from not having a shared vision, mission.
Building up complexity by mindset:
Logical (genba-3): Understandable technology
Conceptual (genba-2): Basic Service provision
Contextual (genba-1): Underpinned changes by decisions
⚙ R-3.2.1 Knowing, understanding by measuring
The long running technical battles
💡 Let's put the technology wars to rest and focus on what really drives success.
We've all heard the heated debates: Windows vs Linux, Oracle vs another DBMS, and (name a programming language) was always better.
Endless arguments about which method, technology, is the ultimate way to run an organization.
But let's be honest, if any one method, technology, was the silver bullet, wouldn't the debate be over by now?
The fact that the discussion still rages shows one thing: none of them are perfect. There must be something more.
Process, algorithms quality
❶ Only developing a system tells nothing about the quality of what has been built.
Testing, verifying, is a capability that answers the question of what level of quality has been built.
This is measuring what can be measured and comparing it to what is needed by requirements and/or standards.
ISTQB is specialised in test qualifications. From Test Analyst, an ICT specialist:
Test conditions are typically identified by analysis of the test basis in conjunction with the test objectives (as defined in test planning).
Testing in the Software Development Lifecycle: ISTQB Advanced Level Test Analyst (syllabus):
The overall SDLC should be considered when defining a test strategy.
The moment of involvement for the Test Analyst is different for the various SDLCs; the amount of involvement, time required, information available and expectations can be quite varied as well.
The Test Analyst must be aware of the types of information to supply to other related organizational roles such as:
Requirements engineering and management ➡ requirements reviews feedback
Project management ➡ schedule input
Configuration and change management ➡ results of build verification testing, version control information
Software development ➡ notifications of defects found
Software maintenance ➡ reports on defects, defect removal efficiency, and confirmation testing
Technical support ➡ accurate documentation for workarounds and known issues
Production of technical documentation (e.g., database design specifications, test environment documentation) ➡ input to these documents as well as technical review of the documents
❷ Planning for testing is a challenge because test execution is only possible after something has been delivered.
Documentation is a deliverable, code is a deliverable, and a working system in an environment is a deliverable.
Test activities must be aligned with the chosen SDLC whose nature may be sequential, iterative, incremental, or a hybrid of these.
For example, in the sequential V-model, the test process applied to the system test level could align as follows:
System test planning occurs concurrently with project planning, and test monitoring and control continues until test completion.
This will influence the schedule inputs provided by the Test Analyst for project management purposes.
System test analysis and design aligns with documents such as the system requirements specification, system and architectural (high-level) design specification, and component (low level) design specification.
Implementation of the system test environment might start during system design, though the bulk of it typically would occur concurrently with coding and component testing, with work on system test implementation activities stretching often until just days before the start of system test execution.
System test execution begins when the entry criteria are met or, if necessary, waived, which typically means that at least component testing and often also component integration testing have met their exit criteria.
System test execution continues until the system test exit criteria are met.
System test completion activities occur after the system test exit criteria are met.
Iterative and incremental models may not follow the same order of activities and may exclude some activities. ...
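As a toy illustration of the entry criteria logic described above, here is a minimal Python sketch; the criteria names and the waiver decision are hypothetical, not taken from the syllabus:

    # Hypothetical entry criteria for system test execution; names are invented.
    entry_criteria = {
        "component tests passed": True,
        "component integration tests passed": False,
        "test environment ready": True,
    }
    # Hypothetical waiver decision by the responsible role.
    waived = {"component integration tests passed"}

    unmet = [c for c, ok in entry_criteria.items() if not ok and c not in waived]
    if unmet:
        print("Hold system test execution; unmet entry criteria:", unmet)
    else:
        print("Start system test execution (entry criteria met or waived).")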
Measurements, fundamental theory
❸ Doing testing requires a measurement applicable for what to test.
"In physical science, the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it.
🤔 I often say that when you can measure what you are speaking about and express it in numbers, you know something about it;
but when you cannot measure it when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.
🤔It may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.”
(Lord Kelvin, Lecture to the Institution of Civil Engineers, 3 May 1883)
⚙ R-3.2.2 Start with understanding the requirements
Doing testing requires a measurement applicable to what is tested. Requirements Elicitation (G.Alleman, 2024). ❹ Requirements: the fundament for design, the why.
Needed change in the IT world
Here's some top-level guidance.
But first, a fundamental change needs to take place in the IT world regarding how to capture requirements.
It's called Capability-Based Planning.
Identifying System Capabilities is the starting point for any successful program.
Systems Capabilities are not direct requirements but statements of what the system should provide regarding "abilities." ❺ Requirements: the fundament for design, functionality, capabilities.
Capabilities Based Planning
The critical reason for starting with capabilities is to establish a home for all the requirements.
To answer the question, "Why is this requirement present?" "Why is this requirement needed?" "What business or mission value does fulfilling this requirement provide?"
Capabilities statements can then be used to define the units of measure for program progress.
Measuring progress with physical percent complete at each level is mandatory for the technical assessment of the project's progress.
However, measuring how the Capabilities are fulfilled is most meaningful to the customer.
The "meaningful to the customer" unit of measure is critical to the success of any program.
These measures are necessary for the program to be cost-effective, scheduled, technically successful, and fulfill its mission. 👁 Starting with the Capabilities prevents the "Bottom-up" requirements gathering process from producing a "list" of all needed requirements that are missing from a well-formed topology.
This Requirements Architecture differs from the system's Technical or Programmatic architecture.
Capabilities-based Planning (CBP) focuses on "outputs" rather than "inputs."
These "outputs" are the mission capabilities that are fulfilled by the program.
Requirements need to be met to fulfill these capabilities.
But we need the capabilities first.
Without the capabilities, it is never clear whether the mission will succeed because there is no clear and concise description of what success means.
The CBP concept recognizes the interdependence of systems, strategy, organization, and support in delivering capability and the need to examine options and trade-offs regarding performance, cost, and risk to identify optimum development investments.
CBP relies on scenarios to provide context for measuring the level of capability.
❻ Requirements: the fundament for design, performance.
Requirements Elicitation
Requirements are the defined attributes for an item before the efforts to develop a design for that item.
System requirements analysis is a structured or organized methodology for identifying an appropriate set of resources to satisfy a system need (the needed capabilities) and the requirements for those resources that provide a sound basis for designing or selecting those resources.
It acts as the transformation between the customer's system needs and the design concept implemented by the organization's engineering resources.
The requirements process decomposes a statement of the customer's need by systematically exposing what the system must do to satisfy that need. This need is the ultimate system requirement from which all other requirements and designs flow.
There are two fundamental classes of requirements:
Process Performance Requirements.
The Process Performance Requirements define how the work processes produce a beneficial outcome for the customer.
Product Performance Requirements.
The Product Performance Requirements define the product specifications and how they relate to the process requirements.
👁 There are functional and non-functional requirements as well as product and process requirements.
These non-functional requirements play a significant role in the development of the system.
Non-functional requirements are spread across the entire system or within individual services and cannot be allocated to one specific product artifact (e.g., class, package, component).
This makes them more challenging to handle than functional requirements. The specifics of the system's architecture, such as highly distributed services, also raise difficulties.
The Work Breakdown Structure (WBS) foundation distinguishes between process and product requirements.
The related Integrated Master Plan (IMP) and Integrated Master Schedule (IMS) also focus on this separation.
The success of the project or program depends on defining both the product and the processes that support or implement the product.
When properly connected, the Requirements Taxonomy, the Work Breakdown Structure, the IMP, and the IMS "foot and tie" the Performance Measurement Baseline (PMB).
This provides traceability of the increasing maturity of the deliverables (vertical) and the physical percent complete of the work efforts (horizontal).
❼ Requirements: the fundament for design, build & test.
Step-by-Step for Capabilities
Determine what the system is supposed to do in terms of Scenarios or Use Cases. This is a familiar approach.
Alistair Cockburn introduced the notion of Use Cases long ago. But they went astray because doing good Use Cases requires understanding what capabilities the customer needs.
What is the business problem or mission to be accomplished? How would you recognize that this problem was solved or the mission was accomplished?
Measures of Effectiveness are the units needed to confirm this accomplishment.
Assemble these Capabilities into a functional architecture, showing how each capability supports the mission or business need.
Develop a maturity flow for each capability, showing how the presence of this capability allows the business or the mission to do some work.
This, of course, is simply agile: working software.
However, agile now sees that bottom-up response to customer needs requires a programmatic architecture framework to ensure that the end is reached as planned.
This is Stephen Covey's Habit 2, Begin with the End in Mind, and is the Integrated Master Plan / Integrated Master Schedule paradigm of DOD 5000.02 procurement.
Imagine that 5000.02 and Agile are on the same page. See Agile+EVM=Success for guidance here.
As each capability appears, the project can start producing valuable services—do something useful.
But—and this is critical—the business or the mission MUST be capable of receiving this capability.
It does no good to have capabilities that cannot be used. The purpose of this diagram is to show what capabilities are needed and in what order.
This is a Top-down process done by the business or mission owners.
A Simple Step-by-Step for Requirements Elicitation
There need to be process and product requirements for each defined capability.
Each requirement MUST flow from a needed capability.
It requires a reason for being there, a parent, and fulfilling some needed capability.
As an aside - all requirements are derived.
This means all requirements are derived from the needed capabilities.
In some circles this is not the paradigm, but in the complex, software-intensive world of space flight and weapons systems:
All Requirements Are Derived from the Mission Statement or the Concept of Operations (ConOps).
These two documents are usually missing in the IT world, and the resulting gap is that we do not know WHY we're doing something.
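To make the "every requirement has a parent capability" rule concrete, a minimal Python sketch; all capability and requirement IDs and texts are hypothetical:

    # Hypothetical capabilities and requirements; IDs and texts are invented.
    capabilities = {
        "CAP-1": "Process invoices end to end",
        "CAP-2": "Recover operations within the agreed RTO",
    }
    requirements = [
        {"id": "REQ-10", "parent": "CAP-1", "text": "Validate invoice totals"},
        {"id": "REQ-11", "parent": "CAP-2", "text": "Restore from backup within 8 hours"},
        {"id": "REQ-12", "parent": None,    "text": "Use a blue login button"},  # orphan
    ]

    # Every requirement must be derived from a named capability (its parent).
    orphans = [r["id"] for r in requirements if r["parent"] not in capabilities]
    print("Orphan requirements without a parent capability:", orphans)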
❽ Testing methodologies using sound underpinned theory and metrics.
Agile test leadership draws upon methods and techniques from traditional software quality management and combines these with new mindset, culture, behaviors, methods, and techniques from quality assistance.
ISTQB Agile Test Leadership at Scale (Body of Knowledge): 👁 Traditional test management has a tendency to focus on managing and controlling the work of others.
Test management in the agile organization has a broader scope than solely focusing on testing the software.
By shifting agile test management to a quality assistance approach, agile test leaders spend more time enabling and empowering others to do the test management themselves.
The aim of this support is to contribute to the improvement of the organization’s QA and testing skills with a view to enabling better cross-functional team collaboration. 👁 Business agility also drives the move away from traditional management roles toward self-empowered delivery teams and enabling leaders (also called servant leaders or leaders who serve).
As a consequence, people in roles such as project manager and test manager sometimes struggle to find their place in organizations moving toward business agility.
This shift means that traditional roles (ad 1), such as test managers, test coordinators, QA engineers, and testers, need to dedicate more time and effort to foster the necessary quality management related skills and competencies throughout the organization rather than actually doing all the testing. ❾ Agility, Lean, effective efficacy, as business culture. 👁 With business agility there is a move toward preventing rather than finding defects, to optimize quality and flow.
Automation, “shift left” approaches, continuous testing, and other quality activities are necessary to keep pace with the incremental deliveries of customer-focused organizations.
These practices are often described using the concept called “built-in quality”.
Additionally, there is also a move to “shift right”.
“Shift right” practices and activities focus on observing and monitoring the solutions in the production environment and measuring the effectiveness of that software in achieving the expected business outcomes.
These practices are often described using the concept called “observability”. 👁 Moving to a quality assistance approach provides many opportunities to reinforce the view that quality is a whole-team responsibility across the entire organization.
One way is for the organization's management to support collaboration within expert groups, often known as communities of practice (CoP).
The expert groups' main goal should be to go to the places where the work happens and work with delivery teams to spread knowledge and behavior.
A successful implementation of quality assistance as a quality management approach results in:
The organization developing a continuous approach to quality with a collaborative quality focus and automated tests
Fewer hand-offs for test activities that slow down value delivery
Less dependence on testing late in the delivery process, which reduces the overall cost of quality
There are many other positive outcomes of quality assistance, which will be covered in later chapters.
ad 1/ Naming conventions of roles differ from organization to organization.
⚙ R-3.2.4 Jabes vision: Servicing the Chief Product Officer
The Jabes framework and tooling must have a sponsor besides other stakeholders.
What would be the most logical sponsor?
💰 The one that is accountable for a product, responsible for managing what is in the portfolio.
A financial budget is needed. ❿ Product quality is intangible in the flow of a product across the complete product lifecycle.
The consequence should be: clear accountability and responsibilities for all involved in the flow.
The position of a CPO.
In a figure:
See right side.
Business Analyst supporting product delivery,
Safety Analyst in the feedback loop: risks & technical performance.
Improvement idea: 💡👁❗ A clear organisational role for the Chief Product Officer.
See "control & command" (I-C6isr) for more on what roles are needed and how they are changing.
R-3.3 Continuous improvement of Safety
In general, compliance means conforming to a rule, such as a specification, policy, standard or law.
Governance, risk management, and compliance are three related facets that aim to assure an organization reliably achieves objectives, addresses uncertainty and acts with integrity.
Building up complexity in mindset:
Logical (genba-3): Understandable technology
Conceptual (genba-2): Basic Safety Service provision
Contextual (genba-1): Underpinned Safety by decisions
⚙ R-3.3.1 Safety: some principles for realisations
Generic stages: DDD (Deter, Detect, Defend)
The stages in safety:
Deter threats in information processes
Detect threats in information processes
Defend against threats when they become real
Generic guidelines
How to manage technology?
International standards such as ISO/IEC 27002 help organizations meet regulatory compliance with their security management.
RFC 2307 attributes allow storing Unix user and group information in an LDAP directory.
Using Active Directory (AD) with Linux integration decreases complexity for identities.
Kerberos is a protocol for authenticating service requests between trusted hosts across an untrusted network, such as the internet.
Kerberos support is built in to all major computer operating systems.
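As a hedged illustration of RFC 2307 attributes in an LDAP directory, a minimal Python sketch using the ldap3 package; the server name, bind credentials, and base DN are placeholders, not real infrastructure:

    from ldap3 import Server, Connection, ALL

    # Placeholders: ldap.example.com, the bind DN, and the password are not real.
    server = Server("ldap.example.com", get_info=ALL)
    conn = Connection(server, user="cn=reader,dc=example,dc=com",
                      password="change-me", auto_bind=True)

    # posixAccount is the RFC 2307 object class carrying Unix identity data.
    conn.search("dc=example,dc=com", "(objectClass=posixAccount)",
                attributes=["uid", "uidNumber", "gidNumber",
                            "homeDirectory", "loginShell"])
    for entry in conn.entries:
        print(entry.uid, entry.uidNumber, entry.homeDirectory)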
A seen statement: "All it takes is one person in an organization to click on the wrong link (email), and then the hackers are in."
😉 Reply: "If so, forget phishing gamification, you have no chance.
Pro tip: Take technical measures.
➡ You mitigate the vast majority of your risks with things like SPF, DMARC, DKIM, firewall settings, end point security, monitoring, segmentation, phishing resistant MFA methods, allowlisting etc etc.
When it comes to cybersecurity measures, there is always a trade-off between security and ease of use."
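A minimal sketch of one such technical measure: checking that a domain publishes SPF and DMARC policies, using the dnspython package; example.com is a placeholder domain:

    import dns.resolver

    def txt_records(name: str) -> list[str]:
        try:
            return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    domain = "example.com"  # placeholder domain
    spf = [t for t in txt_records(domain) if t.startswith("v=spf1")]
    dmarc = [t for t in txt_records("_dmarc." + domain) if t.startswith("v=DMARC1")]
    print("SPF:", spf or "missing")
    print("DMARC:", dmarc or "missing")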
Functional gap: Ethics conduct code
🤔 An attitude is important for this role: an ethical mindset. From the ACFE (Association of Certified Fraud Examiners).
A code of professional ethics, with the ACFE-specific subject replaced by "-":
The work that - professionals perform can have a tremendous impact on the lives and livelihoods of people and organizations.
It is therefore crucial that members - exemplify the highest moral and ethical standards.
All - must agree to abide by the Code of Professional Ethics.
- shall, at all times, demonstrate a commitment to professionalism and diligence in the performance of his or her duties.
- shall not engage in any illegal or unethical conduct,
or any activity which would constitute a conflict of interest.
- shall, at all times, exhibit the highest level of integrity in the performance of all professional assignments and
will accept only assignments for which there is reasonable expectation that the assignment will be completed with professional competence.
- will comply with lawful orders of the courts and will testify to matters truthfully and without bias or prejudice.
- , in conducting examinations, will obtain evidence or other documentation to establish a reasonable basis for any opinion rendered.
No opinion shall be expressed regarding the guilt or innocence of any person or party.
- shall not reveal any confidential information obtained during a professional engagement without proper authorization.
- will reveal all material matters discovered during the course of an examination which, if omitted, could cause a distortion of the facts.
- shall continually strive to increase the competence and effectiveness of professional services performed under his or her direction.
⚙ R-3.3.2 Safety: several basic attention areas
Archiving, auditing, monitoring.
❶ The most forgotten area of requirements to fulfil for long-term stability;
there are multiple goals for archiving information (data):
Warranty for the product after sales/delivery.
Resilience in case of unexpected events.
Legal requirements, in case of a lawsuit.
Relevant type of information: scientific research.
❷ Auditing and monitoring are better known because external auditors require information to underpin the signing of annual financial reports.
❗ The limitation is that this easily gets reduced to what those auditors are asking.
💣 Missing the real reason behind those questions: just making the sign-offs happen.
Business Continuity Management (BCM)
❸ Loss of assets can stop an organisation from functioning.
Risk analysis decides what level of continuity, within what time, at what cost, and with what kind of acceptable loss.
Multiple mitigation options:
Hot stand-by systems. This does not cover logical corruption (ransomware).
Cold stand-by systems. There is some point in the past: a recovery point objective, RPO (backup).
Time is needed to achieve recovery: the recovery time objective, RTO (see the sketch after this list).
A dedicated backup-restore, journalling approach for any type of isolated "application".
❗ An important issue: which components could get compromised at the same time.
Without an isolated, verified fallback approach, recovery may turn out to be impossible.
💣 BCM has visible costs for implementations but no visible advantages and/or profits.
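The sketch announced above: a minimal Python check of RPO/RTO targets against backup, incident, and restore timestamps; all values are invented for illustration:

    from datetime import datetime, timedelta

    RPO = timedelta(hours=4)   # hypothetical: maximum tolerated data loss
    RTO = timedelta(hours=8)   # hypothetical: maximum tolerated downtime

    last_backup = datetime(2024, 9, 1, 2, 0)    # hypothetical nightly backup
    incident    = datetime(2024, 9, 1, 9, 30)   # hypothetical corruption detected
    restored    = datetime(2024, 9, 1, 14, 0)   # hypothetical service restored

    data_loss = incident - last_backup   # the window covered by the RPO
    downtime  = restored - incident      # the window covered by the RTO
    print("RPO met:", data_loss <= RPO, "- data loss window:", data_loss)
    print("RTO met:", downtime <= RTO, "- downtime:", downtime)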
Logging, monitoring.
❹ Logging is tracing the events in a system; the goals are:
Knowing who is working or has worked on some process, for how long, and what resulted from the actions.
Regularly verifying the actual state against the expectation in the administration.
Differences to be solved, with root cause analysis for future avoidance.
This is logical in a physical product approach. 🤔 Why should an administrative product be different? A sketch of such event logging follows below.
Decoupling will give discrete decision options for operational actions.
Knowing the required transformations to achieve prescribed results, insight can be gained into cost/profit.
🚧 ❗ The Security Operations Centre (SOC) is a spin-off tasked with evaluating, monitoring, and reacting to events that possibly compromise the integrity of systems and/or breach information.
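The event-logging sketch announced above, in minimal Python; the process step, user, and outcome are hypothetical:

    import logging
    import time

    logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s",
                        level=logging.INFO)
    log = logging.getLogger("shopfloor")

    def run_step(user: str, step: str) -> None:
        start = time.monotonic()
        log.info("start step=%s user=%s", step, user)
        result = "ok"  # hypothetical outcome of the actual work
        log.info("end step=%s user=%s result=%s duration=%.2fs",
                 step, user, result, time.monotonic() - start)

    run_step("alice", "invoice-check")  # who, what, how long, with what result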
⚙ R-3.3.3 Safety standards, body of knowledge
A 360-degree safety view
From: "What would a 360-degree approach to cyber security look like for the organization?"
A 360-degree approach to cybersecurity for an organization would involve a comprehensive and holistic approach to protecting the organization's assets, both online and offline, from cyber threats.
The approach should address all aspects of cybersecurity, including people, processes, and technology.
Some of the key elements of a 360-degree approach to cybersecurity include:
Conducting regular risk assessments.
Providing regular training to employees on how to identify and prevent cyber threats.
Implementing strict access controls to limit access to sensitive data and systems.
Implementing firewalls, intrusion detection systems, and other security measures.
Data encryption & backup solutions, goal: Data protection & business continuity.
Developing and testing incident response plans.
Ensuring that the organization is compliant with regulations and standards.
Monitoring network and systems for suspicious activity. Actions in case of any breach.
Regularly assessing the security protocols of third-party vendors and partners.
Communicating the cyber security plan and policies to the employees and stakeholders.
Implementing a 360-degree approach to cybersecurity requires a significant investment of time and resources, but it is essential for protecting the organization's assets and ensuring the continuity of business operations.
In a figure:
See right side.
The content in the figure for the topics is odd.
Restructuring what is there and reordering topics: ❺ Issue: missing situational awareness of the business application.
Org: Application - Data, Code security:
👁 Authentication & On-Boarding
👁 Data Encryption
👁 Data Leakage Prevention
👁 Secure Coding Practices
👁 Secure Code Review
👁 Penetration Testing
Org: Risk governance & compliance:
👁 ISO 27001/HIPAA/PCI, SOC
👁 Firewall Compliance and Management
👁 Physical and Logical Reviews
👁 Audit and Compliance Analysis
👁 Configuration Compliance
❻ The header "application" is in fact about platforms and tools.
Tech: Mobile security:
👁 Rogue Access Point Detection
👁 Wireless Secure Protocols
👁 OWASP Mobile Top 10
👁 Mobile App Automated Scanning
👁 Dynamic Mobile App Analysis
👁 Mobile Penetration Testing
👁 Log and False Positive Analysis
Tech: Platform security:
👁 Web Application Security
👁 Web Application Firewall
👁 Database Activity Monitoring
👁 Content Security
👁 Secure File Transfer
👁 OWASP Top 10 and SANS CWE Top 25
👁 Testing for Vulnerability Validation
👁 Platform Penetration Testing
👁 Log and False Positive Analysis
❼ Issue: missing situational awareness of the business application.
Tech: Advanced threat protection:
👁 Botnet Protection
👁 Malware Analysis and Anti-Malware Solutions
👁 Sandboxing and Emulation
👁 Application Whitelisting
👁 Network Forensics
👁 Automated Security Analytics
Tech: Network security:
👁 Firewall Management
👁 Network Access Control
👁 Secure Network Design
👁 Unified Threat Management
👁 Penetration Testing
❽ What has the header "infrastructure" is only a part of infrastructure.
Infra: Network security:
👁 DNS Security
👁 Mail Security
👁 Unified Communications
👁 Remote Access Solutions
👁 Intrusion Detection/Prevention Systems
Infra: Systems security:
👁 Windows/Linux Server Security
👁 Vulnerability/Patch Management
👁 Automated Vulnerability Scanning
👁 Security Information and Event Management
👁 Log and False Positive Analysis
👁 Zero Day Vulnerability Tracking
❾ The source for eight domains:
In security, the Common Body of Knowledge (CBK)
is a comprehensive framework of all the relevant subjects a security professional should be familiar with, including skills, techniques and best practices.
The CBK is organized by domain and is gathered and updated annually by (ISC)² (International Information Systems Security Certification Consortium) to reflect the most relevant topics within the industry.
The eight CISSP domains are the following:
Security and Risk Management. ➡
Governance dealing with risk management concepts, threat modeling, the security model, security governance principles, business continuity requirements, and policies and procedures.
Asset Security. ➡ Topics that involve data management and standards, longevity and use, how to ensure appropriate retention and how data security controls are determined.
Security Engineering. ➡ The security engineering processes, models and design principles, including database security, cryptography systems, clouds and vulnerabilities.
Communications and Network Security. ➡ Network security, the creation of secure communication channels, such as secure network architecture design and components including access control, transmission media and communication hardware.
Identity and Access Management. ➡ System access, authorization, identification and authentication, including access control and multifactor authentication.
Security Assessment and Testing. ➡ Tools needed to find vulnerabilities, bugs and errors in code and system security, as well as vulnerability assessment, penetration testing and disaster recovery.
Security Operations. ➡ Deals with digital forensic and investigations, detection tools, firewalls and sandboxing, as well as incident management.
Software Development Security. ➡ How to build and integrate security into the software development lifecycle. For secure development: NIST SP 800-218.
😱 The CISSP outline does not mention the difference between platforms and business applications.
⚙ R-3.3.4 Jabes vision: Service to the Chief Safety Officer
Issue to solve: lack of awareness ❌ of the difference between platforms, middleware, and business applications.
The CISO role is only technical and unclear on accountabilities and responsibilities. ❿ Safety is intangible in the flow of a product across the complete product lifecycle.
The consequence should be: clear accountability and responsibilities for all involved in the flow.
The position of a CSO.
In a figure:
See right side.
Safety Analyst supporting delivery monitoring,
Business Analyst in the feedback loop: performance & risks.
Improvement idea: 💡👁❗ A clear organisational role for the Chief Safety Officer.
At "control & command" (I-C6isr) for more what roles are needed and are changing.
R-3.4 Information processing adding value
Improving the flow of value streams is what an organisation demands.
Being purposeful in continuous improvement is the value of ITC services.
💡 There is an issue:
Understanding what to improve requires a minimum level of understanding of the organisational knowledge for missions.
Knowing what and how to improve is learned by seeing what is going on and where the constraints are.
Changing an existing culture of doing work is an NP-hard problem.
⚙ R-3.4.1 Focus: value stream at the shopfloor
Delivering a product in a pull push cycle
What Is Systems Architecture And Why Should We Care? (G.Alleman, 2024)
If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility.
This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process.
Grouping data and processes into information systems creates the rooms of the system architecture.
The result of deploying an architecture is arranging the data and processes for the best utility.
Many of the attributes of building architecture apply to system architecture. Form, function, best use of resources and materials, human interaction, design reuse, design decisions' longevity, and resulting entities' robustness are all attributes of well-designed buildings and computer systems.
...
❗ SDLC is usually associated with software; the change here is that it is about systems.
By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result:
Business processes are streamlined
a fundamental benefit to building enterprise information architecture is the discovery and elimination of redundancy in the business processes.
In effect, it can drive the reengineering of the business processes it is designed to support. ...
Systems information complexity is reduced
the architectural framework reduces information system complexity by identifying and eliminating redundancy in data and software.
The resulting enterprise information architecture will have significantly fewer applications, databases and intersystem links. ...
Enterprise-wide integration is enabled through data sharing and consolidation
the information architecture identifies the points to deploy standards for shared data.
For example, most Kimball business units hold a wealth of data about products, customers, and manufacturing processes.
However, this information is locked within the confines of the business unit-specific applications. ...
Rapid evolution to new technologies is enabled.
Client/server, object-oriented technology revolves around understanding data and the processes that create and access this data. ...
references:
A Timeless Way of Building, C. Alexander, Oxford University Press, 1979.
“How Architecture Wins Technology Wars,” C. Morris and C. Ferguson, Harvard Business Review, March–April 1993, pp. 86–96.
AI not the silver bullet
Microsoft's productivity paradox: data fix burnout or track it (F.Ferrer, 2024)
The popular narrative is that AI will eventually replace most jobs, from administrative tasks to more complex roles.
But here’s a controversial take: AI, when used properly, could actually help rehumanize work.
Rather than eliminating jobs, AI can automate mundane tasks, allowing employees to focus on what they do best—creativity, strategy, and problem-solving.
However, the key lies in how AI and data are deployed.
Will AI empower employees, or will it simply create a more automated, less human workplace?
If companies don’t tread carefully, they risk turning data-driven productivity tools into instruments of micromanagement, deepening the very burnout they aim to resolve.
❗ The everlasting productivity paradox is still going strong.
Without solving what is holding back productivity in administrative cyber systems, it will continue.
Value stream understanding
Process mining is reverse engineering the value stream (VSM).
It is far too complicated to start with process mining without understanding the VSM.
From: "Want to do a process mining project" slides and videos (vdaalst).
🤔 The design of a value stream by humans often assumes seriality where some steps can be executed in parallel.
With unpredictable external events, a process flow that shows those events is more applicable.
The difference between the ideal process and reality will then become smaller.
Checkpoints in the progress of a VSM will become necessary.
Simplistic sequential and more complex event driven in a figure:
See left side
🤔 Even then, do not expect all process events to follow the expectation from the VSM map.
❗ Not all process events will follow the ideal expectation for a VSM.
Several applicable flows can coexist; one reason is that parallel execution is possible.
For example:
send invoice and payment are parallel options
prepare delivery is started after checking invoice and payment
make delivery and confirm payment are parallel options
Several sequential process flows in a figure:
See right side
When doing measurements, markers for these valid possible options must be in place; a sketch follows below.
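A minimal Python sketch of such markers: deriving the directly-follows relation from an event log, which is where process mining starts when comparing observed flows with the designed VSM; the event log is hypothetical but uses the activities named in the example above:

    from collections import Counter

    # Hypothetical event log: one trace of activities per case.
    event_log = [
        ["order", "send invoice", "payment", "prepare delivery",
         "make delivery", "confirm payment"],
        ["order", "payment", "send invoice", "prepare delivery",
         "confirm payment", "make delivery"],
    ]

    follows = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            follows[(a, b)] += 1

    for (a, b), n in sorted(follows.items()):
        print(f"{a} -> {b}: {n}")
    # Both orderings of "send invoice"/"payment" occur: evidence of parallelism.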
⚙ R-3.4.2 An extensive closed loop framework (I)
Ideate a floorplan, abstraction levels
A post triggered a short discussion. It resulted in an idea of how to improve information processing to help in decision making.
There are an incredible number of good usable models out there, but we've lost our way by getting too caught up in the tech buzz.
What we're doing is exchanging pretty abstract ideas between people with language and pictures.
At the vertical axis (Abstract ⇄ Words-1 / Words-2):
1 Context ⇄ Governance / Direction
2 Content ⇄ Organization / Form, texture
3 Logical ⟳ Information / Logical contents
4 Physical ⇆ Data / Building blocks
5 Details ⇆ ICT / Features
Of course there are more words to come up with; what matters is that they support the story, make it stronger, and give it more structure.
I made the link to Zachman, with 3 theoretical and 3 practical interpretations arranged on a vertically ordered axis.
The result is a 6*6 surface that can serve as a floor plan: the lean mindset, genba.
For each cell a detailed "Why" is to be answered.
Ideate: "Closed loops"
On the horizontal axis it can be ordered and organized with an underlying explanation (Abstract ➡ Words-1):
What ➡ Optimized business operations
How ➡ Innovative service provision
Where ➡ Improved decision-making
Who ➡ Control & Command
When ➡ Portfolio change & knowledge assurance
Which ➡ Culture, Behaviour
😉 Lean and Agile are not about a goal of full automation into complicated dashboards.
They are about automating repeatable and time-consuming tasks, so that the focus is on tasks with more added value.
A balance between automation and human involvement should always be in place.
Continuous improvement, learning from all attempts at improvement, is central as a culture. 💡👁❗ A new organisational structure for: "Closed Loops".
Development (engineering, architecting) and operations (using, exploiting) are two complementary human mindsets.
Forcing people specialised in one of the two to do the other is not respectful.
In a cooperative team that is able to innovate you need both of them.
preparations
stages: ❶ Optimized business operations ❷ Innovative service provision ❸ Improved decision-making
The floorplan is the map of all 6*6 areas.
What: Optimized business operations
Bills of materials - theoretical plan:
Understand "Decision makers" needs
Create value that meets "Decision makers" needs
Ensure feedback loops with "Decision makers"
Bills of materials - practical realisation:
Ensure feedback loops with "Decision makers"
Implement short delivery cycles for "Decision makers"
Focus on "Decision makers" satisfaction and experience
How: Innovative service provision
Functional Specs - theoretical plan:
use lean principles to avoid the three evils: muda, mura, muri
use tools that promote collaboration and integration (jabes build, test, operate)
automate where it improves the efficiency of the whole
Functional Specs - practical realisation:
Automate where it improves the efficiency of the whole
Minimize manual processes that are prone to errors
Ensure continuous improvement of tools and workflows
Where: Improved decision-making
Drawings Geometry - theoretical plan:
Encourage a culture of feedback and adaptation
Use retrospectives to learn from mistakes
Preferably implement small iterative changes
Drawings Geometry - practical realisation:
Preferably implement small iterative changes
Ensure teams are continuously evolving their skills and knowledge
Quickly implement changes and test new ideas (Jabes: suggestionbox, backlog)
⚙ R-3.4.3 An extensive closed loop framework (II)
implementing & usage
stages: ❹ Control & Command ❺ Portfolio Change ❻ Culture, Behaviour
The floorplan is the map of all 6*6 areas.
Who: Control & Command
Operating instructions - theoretical plan:
Create multidisciplinary teams with clear objectives
Measure and analyze every step in the value stream
Use information to make informed decisions
Timing diagrams - practical realisation:
Use information to make informed decisions
Ensure a balance between speed and quality
Eliminate bottlenecks and inefficiencies
Which: Culture, Behaviour
Design objectives - theoretical plan:
Promote transparency throughout the organization
Cultivate a culture of trust and openness
create a blame-free environment where people feel safe to make mistakes
Design objectives - practical realisation
create a blame-free environment where people feel safe to make mistakes
Create shared visions for missions
Value diversity in thinking with methodologies
⚙ R-3.4.4 Retrospective "Closed Loops", Genba-2
"Closed loops"
"Closed loops" refer to systems that continuously monitor, analyze, and optimize processes to improve efficiency and reduce waste.
This integrates data and feedback mechanisms to create a sustainable and efficient flow of resources.
Data Integration and Connectivity: Seamless integration from various sources.
(near) Real-Time Monitoring and Analytics: Continuous monitoring of operations.
Feedback Mechanisms: Systems that learn and adapt based on data insights.
Sustainability: Minimizing waste and maximizing resource efficiency.
This is a departure from traditional linear production models, aiming for circularity and sustainability.
This structure is a subset of the proposal for a new DevOps structure.
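A minimal Python sketch of a closed loop in the above sense: measure, compare with a target, and feed a correction back; the metric and gain are hypothetical:

    def closed_loop(measure, actuate, target: float,
                    steps: int = 5, gain: float = 0.5) -> None:
        # Monitor, compare with the target, and feed a correction back.
        for _ in range(steps):
            value = measure()
            error = target - value
            actuate(gain * error)
            print(f"measured={value:.2f} error={error:.2f}")

    state = {"value": 10.0}  # hypothetical process metric
    closed_loop(lambda: state["value"],
                lambda delta: state.update(value=state["value"] + delta),
                target=15.0)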
The SIAR model
The model was created out of experienced unhappiness with other models, including PDCA.
Situation
Initiatives or Inputs
Actions
Requests and Results
For the flows:
The pull is at the bottom right to left
The push is at the top left to right
The cycle is clockwise starting bottom right
Negotiations are controversial for flow continuity, limiting overproduction.
The PDCA and DMAIC cycles are the same, but shifted onto diagonals instead of the vertical control and horizontal flow.
Act (decide what is next) by analysing the Situation.
The formal definition of situational awareness
is often described as three ascending levels:
Perception of the elements in the environment,
Comprehension or understanding of the situation, and
Projection of future status.
People with the highest levels of SA have not only perceived the relevant information for their goals and decisions, but are also able to integrate that information to understand its meaning or significance, and are able to project likely or possible future scenarios.
These higher levels of SA are critical for proactive decision making in demanding environments.
Improvement idea: 💡👁❗ A new organisational structure for: "Closed Loops".
At "control & command" (I-C6isr) for more what data literacy awareness is needed.
R-3.5 Adapting Cooperative Communication
Understanding the meaning of intentions is the first step for understanding the information from data sources.
💡 An issue, a paradox :
Knowledge is required to create a thesaurus.
A thesaurus is required to become data literate.
Data literacy is needed to build up knowledge.
Creating and using data sources follows these steps.
Continuous improvement is the path to success.
⚙ R-3.5.1 Genba 1,2,3 using the virtual shopfloor
The kind of decisions to understand
Making Choices in the Absence of Information (G.Alleman, 2024)
Decision-making in uncertainty is a standard business function and a normal technical development process. The world is full of uncertainty.
🤔 Those seeking certainty will be woefully disappointed.
🤔 Those conjecturing that decisions can't be made in uncertainty are woefully misinformed.
Along with all this woefulness is the boneheaded notion that estimating is guessing and that decisions can actually be made in the presence of uncertainty without estimating.
Here's why.
When we are faced with a choice between multiple decisions, a choice between multiple outcomes, each is probabilistic.
If it were not—that is, if we had 100% visibility into the consequences of our decision, the cost involved in making that decision, and the cost impact or benefit impact from that decision—it's no longer a decision.
It's a choice to pick between several options based on something other than time, money, or benefit.
Uncertainty comes in many forms:
Statistical uncertainty (aleatory): only margin can address this uncertainty.
Subjective judgment: bias, anchoring, and adjustment.
Systematic error: the need to understand the reference model.
Incomplete knowledge (epistemic uncertainty): this lack of knowledge can be improved with effort.
Temporal variation: instability in the observed and measured system.
Inherent stochasticity: instability between and within collaborative system elements.
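A minimal Python sketch of deciding under aleatory uncertainty: comparing two options by simulation rather than demanding certainty; both cost distributions are invented for illustration:

    import random

    def option_a() -> float:   # hypothetical: cheaper on average, highly variable
        return random.gauss(100, 40)

    def option_b() -> float:   # hypothetical: pricier on average, predictable
        return random.gauss(120, 5)

    N = 10_000
    samples_a = [option_a() for _ in range(N)]
    samples_b = [option_b() for _ in range(N)]
    print(f"A: mean={sum(samples_a)/N:.0f} "
          f"P(cost>150)={sum(x > 150 for x in samples_a)/N:.2%}")
    print(f"B: mean={sum(samples_b)/N:.0f} "
          f"P(cost>150)={sum(x > 150 for x in samples_b)/N:.2%}")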
There are three levels to orchestration support for decision making:
genba-1 strategic, genba-2 tactical, genba-3 operational.
The interactions form a complex similar to the SIAR model in understanding information.
A role of the CAIO is helping others as the central point in the middle.
A role of the CDO is helping others understand each other as the central point in the middle.
Value stream understanding
Process mining is reverse engineering the value stream (VSM).
It is far too complicated to start with process mining without understanding the VSM.
From: "Want to do a process mining project" slides and videos (vdaalst).
❗ Get information on the expected possible flows, for example:
Several sequential process flows in a figure:
See right side
❗ Get information on the designed possible flows, for example:
Simplistic sequential and more complex event driven in a figure:
See left side
⚙ R-3.5.2 An extensive Data literacy framework (I)
Ideate: data literacy
A Dutch educator (PF Oosterbaan) organised a session: from data to chocolate, the simplistic goal of data literacy.
The content goal of the educator is comparable to How to Lie with Statistics (Darrell Huff, 1954).
The book is a brief, breezy illustrated volume outlining the misuse of statistics and errors in the interpretation of statistics, and how errors create incorrect conclusions.
The difference is that the educator wants to educate towards becoming data literate and creating understandable results.
The session resulted in an idea what is to communicate.
😉 New is an ordered flow for data literacy in six steps.
The result is that it becomes actionable and interpretable.
There is a logical underlying explanation by dependencies.
The first stages (recognize, read, understand, analyse) conform to the DIKW flow (pyramid): data, information, knowledge, wisdom (insight).
Added are:
Communicating insight
Act - implement change
The horizontal axis in the six stages, ordered and organized (Abstract ➡ Words-1):
What ➡ Recognize data for information
How ➡ Reading information
Where ➡ Understanding information
Who ➡ Analysing - getting insight
When ➡ Communicating insight
Which ➡ Act - implement change
😉 Lean Agile is about avoiding overload and avoiding the waste caused by overload.
Continuous improvement, learning from all attempts at improvement, is central as a culture.
There is exponential growth in data.
The threat is being overloaded by information, no longer knowing what to do next.
The complexity of what is going on in all the detailed information is another overload threat. 💡👁❗ A data literacy structure: "Data driven decision making".
preparations
stages: ❶ Recognize data for information ❷ Reading information ❸ Understanding information
The floorplan is the map of all 6*6 areas.
What: Recognize data for information
Bills of materials - theoretical plan:
Recognize important sources for data
Prioritize the important sources
Important sources defined as required resource for systems
Bills of materials - practical realisation:
Important sources defined as required resource for systems
Alignment to scales of measurements with operations
Defined scales of measurements for resources as master data
How: Reading information
Functional Specs - theoretical plan:
Understand the meaning of used scales for measurements
Define the storing, archive and destroying for measurements
Defined usage of measurements for resources as master data
Functional Specs - practical realisation:
Defined usage of measurements for resources as master data
Implement the defined scales of measurements for resources as metrics (a sketch of measurement scales follows after this list)
Realisation of "data collectors" for resources in place and serviced
Where: Understanding information
Drawings Geometry - theoretical plan:
Simple usage of the specified measurements
Getting wisdom insight by interactions of multiple measurements
Encourage a culture of feedback and adaptation
Drawings Geometry - practical realisation:
Encourage a culture of feedback and adaptation
Analytics support for getting wisdom insight by measurement interactions
Scheduled delivery for the simple specified measurements
⚙ R-3.5.3 An extensive Data literacy framework (II)
implementing & usage
stages: ❹ Analysing for insight ❺ Communicating insight ❻ Act - implement change
The floorplan is the map of all 6*6 areas.
Who: Analysing - getting insight
Operating instructions - theoretical plan:
Understand "Decision makers" needs
Create value that meets "Decision makers" needs
Ensure feedback loops with "Decision makers"
Operating instructions - practical realisation:
Ensure feedback loops with "Decision makers"
Implement short delivery cycles for "Decision makers"
Focus on "Decision makers" satisfaction and experience
When: Communicating insight
Timing diagrams - theoretical plan:
Focus on optimizing the value flow
Measure and analyze every step in the value stream
Use information to make informed decisions
Timing diagrams - practical realisation:
Use information to make informed decisions
Ensure a balance between speed and quality
Eliminate bottlenecks and inefficiencies
Which: Act - implement change
Design objectives - theoretical plan:
Promote transparency throughout the organization
Cultivate a culture of trust and openness
create a blame-free environment where people feel safe to make mistakes
Design objectives - practical realisation
create a blame-free environment where people feel safe to make mistakes
Create shared visions for missions
Value diversity in thinking with methodologies
⚙ R-3.5.4 Retrospective lean: Data literacy, Genba-1
What is Data literacy?
Data literacy is the ability to read, understand, create, and communicate data as information. Much like literacy as a general concept, data literacy focuses on the competencies involved in working with data. It is, however, not similar to the ability to read text since it requires certain skills involving reading and understanding data.
Data literacy refers to the ability to understand, interpret, critically evaluate, and effectively communicate data in context to inform decisions and drive action. It is not a technical skill but a fundamental capability for everyone, encompassing the skills and mindset necessary to transform raw data into meaningful insights and apply these insights within real-world scenarios.
The data, information life cycle
The six life cycle stages (a sketch in code follows after this list):
Plan: what kind of information is needed
Design: preparation with scale of measurements
Realisation: create data collectors
Manage: store, archive or destroy the information
Usage: simple usage of the specified measurements
Insight: Getting wisdom insight by interactions of multiple measurements
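The sketch announced above: the six life cycle stages as an ordered enum in minimal Python, with insight feeding back into planning as a closed loop:

    from enum import IntEnum

    class LifeCycle(IntEnum):
        PLAN = 1         # what kind of information is needed
        DESIGN = 2       # preparation with scale of measurements
        REALISATION = 3  # create data collectors
        MANAGE = 4       # store, archive or destroy the information
        USAGE = 5        # simple usage of the specified measurements
        INSIGHT = 6      # wisdom by interactions of multiple measurements

    def advance(stage: LifeCycle) -> LifeCycle:
        # Insight feeds back into planning: the closed loop the page aims for.
        return LifeCycle((stage % 6) + 1)

    stage = LifeCycle.PLAN
    for _ in range(7):   # a bit more than one full cycle
        print(stage.name)
        stage = advance(stage)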
Feedback loops play an integral role in customer service and business processes.
Creating a feedback system involves several key steps to ensure that feedback is collected, analyzed and acted upon effectively.
Data literacy is a requirement for being able to work with measurements in closed loops.
The goal of closed loops is better, objective decisions.
A retrospective is a subjective discussion of what could be improved.
A mere feedback signal comes without any expectations.
Improvement idea: 💡👁❗ A data literacy structure: "Data driven decision making".
At "control & command" (I-C6isr) for more what effective efficient results are possible.
R-3.6 Maturity 5: ICT solutions adding value
Continuous improvement using BI&A (business intelligence & analytics) for closed loops is the lean principle.
From the three ICT, ITC interrelated scopes:
✅ I - processes & information
✅ T - Tools, Infrastructure
✅ C - Organization optimization
Only having a focus on technology will fail by missing what a strategy for an organisation is about.
⚖ R-3.6.1 Mindset prerequisites
Systems Philosophy
Purposeful systems (book 1972),
R.L. Ackoff was an American organizational theorist, consultant, and Anheuser-Busch Professor Emeritus of Management Science at the Wharton School, University of Pennsylvania.
Ackoff was a pioneer in the field of operations research, systems thinking and management science.
The influence on systems thinking:
Any human-created systems can be characterized as "purposeful system" when its "members are also purposeful individuals who intentionally and collectively formulate objectives and are parts of larger purposeful systems".
Other characteristics are:
"A purposeful system or individual is ideal-seeking if... it chooses another objective that more closely approximates its ideal".
"An ideal-seeking system or individual is necessarily one that is purposeful, but not all purposeful entities seek ideals", and
"The capability of seeking ideals may well be a characteristic that distinguishes man from anything he can make, including computers".
Levels in Lean Agile
Levels in lean: How Many Genbas are There? (B.Emilani, 2019).
There is no link to the source anymore. He has chosen not to share the ideas any longer, with the reason that personal financial profit is more important than improvements by sharing jigsaw pieces of knowledge.
The world of leveled genba-s:
My focus since the mid-1990s has been Genba 1 — the mind of leaders.
For me, Genba 1 is the most interesting genba by far; specifically, leaders’ mindset, thinking, decision-making (including no decision), and actions (including no action).
Genba 1 is the most challenging because it does not reveal the truth as easily as Genba 3 and Genba 2.
In fact, Genba 1 actively seeks to conceal and subvert the truth, sometimes unknowingly.
Genba 1 seems to be an unbreakable enigma, but the code can indeed be cracked.
.. Simple causality informs us that Genba 1 is what allows Genba 3 and Genba 2 to happen or not happen.
The fundamental problem that I have long pursued is:
Why doesn’t Genba 1 allow Genba 3 and Genba 2?
Related questions include:
Why does Genba 1 dislike Lean transformation?
Why does Genba 1 limit Lean to the use of certain tools?
The leveled genbas idea in a figure,
see right side.
There is a lot of frustration behind those questions.
A lot of valuable collected information is behind that.
The split in the several levels gives a direction for what is going on:
Lean has been commercialized for doing changes at the shop-floor, genba-3.
Involvement for required changes at genba-1 genba-2 was not included.
Strategic visions are missing, or what is stated as strategy is not strategic.
Although there is an extensive history of scientific management, genba-1, the translation into correlated requirements and changes at the other levels was never made.
Genba-2 is left clueless for translating visions into missions.
Genba-3 is suffering from fragmented attempts, cargo cult (Y-2.6.1 I-Jabes).
Going for lean, Genba-1
Lean in a 3*3 plane is an innovative idea. It is the result of combining the SIAR model with lean.
The lean philosophy in a 9-plane figure,
see right side.
Not only:
🎭 pillars, bars,
🎭 or diagonals,
🎭 edges - moderators
🎭 but also repetitive clockwise cycles.
Each pillar is bottom up from a novice to a master mind. ❶ ❷ ❸ The left pillar is what organisational leaders are supposed to do.
In the TPS example:
Set by the Toyodas.
The culture of the Japanese country at that moment in time enabled this.
❹ ❻ The middle pillar is what a technical leader can do.
In the TPS example:
set by T.Ohno, a brilliant technician and a dissonant.
the success came at a moment of global societal change, highlighting effective efficiency as a competitive advantage.
❼ ❽ ❾ The right pillar is where an advisor can help.
It is the culture set at pillar-1 that enabled this.
❺ In the middle the strategic goal and the threats.
In the TPS example: set by the Toyodas.
It is the organisational leaders who enable this, but where to start?
Have the strategic goals translated into something measurable; goal: closed loops.
Go around clockwise from the bottom-right corner.
Continuously repeat the cycle, holding off threats and improving the set goals.
Improvement idea: 💡👁❗ An approach for going holistic lean agile at all genba levels.
At "control & command" (I-C6isr) to do more investigation.
⚙ R-3.6.2 An extensive DevOps framework (I)
Ideate: Lean at devops
😉 Lean and Agile are not about a goal of full automation.
They are about automating repeatable and time-consuming tasks, so that teams can focus on the tasks with more added value.
There should always be a balance between automation and human involvement.
Continuous improvement, learning from all attempts at improvement, is central as a culture. 💡 A post triggered a short discussion. It resulted in an idea of how to improve development and operations (DevOps), the information technology service.
DevOps principles (DASA, 2024)
These steps from DASA assume full automation is the goal; that is not correct when wanting a lean culture.
👐 The real attention point for that is: "culture".
Development (engineering, architecting) and operations (using, exploiting) are two complementary human mindsets.
Forcing people specialised in one of the two to do the other is not respectful.
In a cooperative team that is able to innovate you need both of them.
What would the development line look like?
preparations
stages: ❶ Customer Focus ❷ Processes & Tools ❸ Continuous learning & improvements
The floorplan is the map of all 6*6 areas.
What: Customer Focus
Bills of materials - theoretical plan:
Understand customers needs
Create value that meets customers needs
Ensure feedback loops with customers
Bills of materials - practical realisation:
Ensure feedback loops with customers
Implement short delivery cycles for customers
Focus on customer satisfaction and experience
How: Processes & Tools
Functional Specs - theoretical plan:
use lean principles to avoid the three evils: muda, mura, muri
use tools that promote collaboration and integration (jabes build, test, operate)
automate where it improves the efficiency of the whole
Functional Specs - practical realisation:
Automate where it improves the efficiency of the whole
Minimize manual processes that are prone to errors
Ensure continuous improvement of tools and workflows
Where: Continuous learning & improvements
Drawings Geometry - theoretical plan:
Encourage a culture of feedback and adaptation
Use retrospectives to learn from mistakes
Preferably implement small iterative changes
Drawings Geometry - practical realisation:
Preferably implement small iterative changes
Ensure teams are continuously evolving their skills and knowledge
Quickly implement changes and test new ideas (Jabes: suggestionbox, backlog)
⚙ R-3.6.3 An extensive DevOps framework (II)
implementing & usage
stages: ❹ Team structure ❺ Value stream management ❻ Culture
The floorplan is the map of all 6*6 areas.
Who: Team structure
Operating instructions - theoretical plan:
Create multidisciplinary teams with clear objectives
Measure and analyze every step in the value stream
Use information to make informed decisions
Timing diagrams - practical realisation:
Use information to make informed decisions
Ensure a balance between speed and quality
Eliminate bottlenecks and inefficiencies
Which: Culture
Design objectives - theoretical plan:
Promote transparency throughout the organization
Cultivate a culture of trust and openness
create a blame-free environment where people feel safe to make mistakes
Design objectives - practical realisation
create a blame-free environment where people feel safe to make mistakes
Create shared visions for missions
Value diversity in thinking with methodologies
⚙ R-3.6.4 Retrospective lean DevOps, Genba-3
Information, DevOps extended as Service
This comprehensive approach provides a more robust and versatile framework for implementing DevOps within an organization.
Each component is approached both strategically and operationally, so that sufficient attention is paid to every aspect of the organization and its transformation process.
It can help organizations take a comprehensive view of DevOps that focuses on customer value, processes, teams, culture, and continuous learning.
The new DevOps
The complete structure of related changes & improvements.
South: Details for data literacy:
A data thesaurus life cycle.
A data information life cycle.
East: An extensive closed loop framework.
North: An extensive data literacy framework.
West: Information DevOps extended assistance structure.
Improvement idea: 💡👁❗ A new organisational structure for: Information Service.
See "control & command" (I-C6isr) to do more investigation.