Historical Information & Resilience Processes.
Auditing, logging, monitoring, retention policies.
Information Archive.
With the data explosion, data provisioning needs a structured approach in a value stream.
📚 Information questions.
⚙ Measurements, data, figures.
🎭 What to do with new data?
⚖ Legally & ethically acceptable?
🔰 Most logical back reference.
Contents
| Reference | Topic | Squad |
|-----------|-------|-------|
| retentions | Archiving, retention policies. | 02.01 |
| continuity | Business continuity. | 03.01 |
| logging | Logging, monitoring. | 04.01 |
| auditing | Auditing, monitoring. | 05.01 |
| what next | Change data - transformations. | 06.00 |
|  | Combined pages as single topic. | 06.02 |
Combined pages as single topic.
👓 Info types - different types of data
👓 Value stream of the data as product
🚧 Transform information - data inventory
👓 Data silo - BI analytics, reporting
Progress
- 2020 week 44
- Page emptied, made ready for this dedicated content.
- Issue: content not yet ordered.
Archiving, retention policies.
Information is not only active and operational but also historical: what has happened, who executed it, what was delivered, when the delivery took place, when the purchase was made, and so on.
That kind of information is often very valuable, but at the same time it is not well understood how to organise it and who is responsible for it.
💣 Retention policies and archiving information are important to get right, yet the financial and legal advantages are not obviously visible. Only when problems escalate to high levels does this become clear, and by then it is too late to solve.
When an organisation is in financial trouble, cutting these costs is easily done.
 
Historical and scientific purposes, moved out of any organisational process.
An archive is an accumulation of historical records in any media or the physical facility in which they are located.
Archives contain primary source documents that have accumulated over the course of an individual or organization's lifetime, and are kept to show the function of that person or organization.
Professional archivists and historians generally understand archives to be records that have been naturally and necessarily generated as a product of regular legal, commercial, administrative, or social activities.
(Archive)
The words "record" and "document" have a slightly different meaning in this context than technical ICT staff are used to.
In general, archives consist of records that have been selected for permanent or long-term preservation on grounds of their enduring cultural, historical, or evidentiary value.
Archival records are normally unpublished and almost always unique, unlike books or magazines of which many identical copies may exist.
This means that archives are quite distinct from libraries with regard to their functions and organization, although archival collections can often be found within library buildings.
Additional information container attributes.
😉 EDW 3.0: Every information container must be fully identifiable, minimally by the attributes below (see the sketch after this list):
- a logical context key
- the moment of relevance
- the moment received, available at the warehouse
- the source of the received information container.
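A minimal sketch of such an identification record, assuming Python and illustrative field names (none of these names are prescribed by EDW 3.0):

```python
# Sketch of the minimal identification attributes listed above.
# Field names are illustrative assumptions, not a fixed standard.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ContainerIdentification:
    logical_context_key: str       # logical context key of the container
    moment_of_relevance: datetime  # the moment the content is about (business time)
    moment_received: datetime      # when it was received and became available at the warehouse
    source_system: str             # source that delivered the information container

# Example: a batch received from a hypothetical "billing" source system.
ident = ContainerIdentification(
    logical_context_key="billing/invoices/batch-0042",
    moment_of_relevance=datetime(2020, 10, 31),
    moment_received=datetime(2020, 11, 2, 6, 15),
    source_system="billing",
)
```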
When there are compliance questions on information of this kind, it is often assumed to be an ICT problem only. Classic applications lack these kinds of attributes on their information.
💡 Additional information container attributes support the implementation of defined retention policies.
Every information container must carry the applicable retention references (see the sketch after the list):
- Normal operational visibility moments:
  - registered in the system
  - information validity start
  - information validity end
  - end of registration in the system
- Legal case relevance:
  - legal case registered in the system (start)
  - end of registration for the legal case in the system
- Internal extended archive purposes:
  - end of registration for archiving purposes in the system
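A minimal sketch of these retention references, assuming Python; the field and method names are illustrative, and the purge rule is only one possible interpretation:

```python
# Sketch of the retention references listed above. Moments that are not yet
# known stay None; names and the purge rule are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RetentionReferences:
    # normal operational visibility moments
    registered_in_system: datetime
    validity_start: datetime
    validity_end: Optional[datetime] = None
    registration_end: Optional[datetime] = None
    # legal case relevance
    legal_case_start: Optional[datetime] = None
    legal_case_registration_end: Optional[datetime] = None
    # internal extended archive purposes
    archive_registration_end: Optional[datetime] = None

    def may_be_purged(self, now: datetime) -> bool:
        """Purge only when the operational end is known and every known end moment has passed."""
        if self.registration_end is None:
            return False
        ends = [self.registration_end,
                self.legal_case_registration_end,
                self.archive_registration_end]
        return all(end <= now for end in ends if end is not None)
```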
Common issues when working with retention periods.
⚠ An isolated archive system is a big hurdle in complexity, reliability, and availability, with high impact.
⚠ Information that is relevant for legal purposes but has been moved out of the manufacturing process, and is no longer available for legal cases, is problematic.
⚠ Cleaning data as soon as possible has a high impact. The GDPR states that it should be deleted as soon as possible.
This law gets much attention and has active regulators. Archiving information for longer periods is not directly covered by laws, only indirectly.
Government Information Retention.
Instead of a fight over how it should be solved, there is a fight over who else is to blame for missing information.
Different responsible parties have their own opinion on how conflicts in retention policies should be solved.
🤔 When information is deleted permanently, there is no way to recover it if that decision turns out to be wrong.
🤔 The expectation that it will be cheaper and of better quality is a promise without warranty.
Business Continuity.
Loss of assets can leave an organisation unable to function. It is a risk analysis to determine what level of continuity is required, within what time, at what cost, and what kind of loss is acceptable.
💣 BCM is risk based, with visible costs for the needed implementations but no visible advantages or profits. There are several layers:
- Procedures, organisational.
- People, personal.
- Products, physical & cyber.
- Communications.
- Hardware.
- Software.
Loss of physical office & datacentre.
In the early days of using computers, everything was located close to the office with all users, because the technical communication lines did not allow long distances.
Batch processing meant a day or longer before results appeared as hard-copy prints. Terminal usage was limited and needed copper-wire connections.
 
The disaster recovery plan was based on relocating the office with all users and the data centre when needed, in case of a total loss (disaster).
For business applications there was a dedicated backup for each of them, aside from the needed infrastructure software including the tools (applications).
⚠ The period to recover could easily span several weeks; there was no great dependency yet on computer technology. Payments, for example, did not have any such dependency in the 70s.
Loss of network connections.
With the increased telecommunications capacity, the datacentre was relocated. A hot standby with the same information on real-time duplicated storage became possible.
⚠ The cost argument with this new option resulted in ignoring resilience against other types of disasters and ignoring archiving compliance requirements.
⚠ With a distributed approach to datacentres, the loss of a single datacentre is not a valid scenario anymore. With services spread over locations, the isolated DR test of having one location fail no longer has the value it had before.
Loss of control over critical information.
Loss of information, compromised software tools, and compromised database storage are the new scenarios now that everything has become accessible over communication networks.
Losing control to hackers, being held to ransom, or having data leaked externally is far more likely and more common than the previous disaster scenarios.
Not everything can be prevented. Some events are too difficult or costly to prevent. A risk-based evaluation of how to build resilience is needed.
⚠ Loss of data integrity - business.
⚠ Loss of confidentiality - information.
⚠ Robustness failing - single points of failure.
The Swiss cheese model of accident causation is a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations,
and as the principle behind layered security, as used in computer security and defense in depth.
Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize, since other defenses also exist, to prevent a single point of failure.
Although the Swiss cheese model is respected and considered to be a useful method of relating concepts, it has been subject to criticism that it is used too broadly, and without enough other models or support.
Several triads of components.
Eliminating single points of failure in a backup (restore) strategy. Only the proof of a successful recovery is a valid checkpoint.
3-2-1 backup rule: the 3-2-1 backup strategy is made up of three rules, as follows (a sketch checking the rule follows after the list):
- Three copies of data - this includes the original data and at least two backups.
- Two different storage types - the backed-up copies should be kept on two separate storage types to minimise the chance of failure. Storage types could include an internal hard drive, external hard drive, removable storage drive or cloud backup environment.
- One copy offsite - at least one data copy should be stored in an offsite or remote location to ensure that natural or geographical disasters cannot affect all data copies.
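A minimal sketch, assuming Python, that checks whether a set of copies satisfies the three rules; the BackupCopy fields and example storage type names are illustrative assumptions:

```python
# Sketch: verify the 3-2-1 rule over a list of backup copies.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class BackupCopy:
    storage_type: str  # e.g. "internal-disk", "tape", "cloud" (illustrative labels)
    offsite: bool      # True when the copy is stored at a remote location

def satisfies_3_2_1(copies: List[BackupCopy]) -> bool:
    three_copies = len(copies) >= 3                                 # original plus at least two backups
    two_storage_types = len({c.storage_type for c in copies}) >= 2  # two different storage types
    one_offsite = any(c.offsite for c in copies)                    # at least one copy offsite
    return three_copies and two_storage_types and one_offsite

# Example: original on internal disk, a tape copy on site, a cloud copy offsite.
copies = [BackupCopy("internal-disk", offsite=False),
          BackupCopy("tape", offsite=False),
          BackupCopy("cloud", offsite=True)]
print(satisfies_3_2_1(copies))  # True
```

Such a check only complements the earlier point: only the proof of a successful recovery is a valid checkpoint.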
BCM is related to information security. They share the same basic components and the same goals.
An organisation's resistance to failure is "the ability ... to withstand changes in its environment and still function".
Often called resilience, it is a capability that enables organisations either to endure environmental changes without having to permanently adapt, or to adapt to a new way of working that better suits the new environmental conditions.
Image: by I, JohnManuel, CC BY-SA 3.0.
Logging, monitoring.
Logging events when processing information generates new information. Using that logging information serves several goals.
Some log information is related to the product and could also become new operational information.
💣 When there are different goals, an additional copy of the information is an option, but it introduces the possibility of integrity mismatches.
 
Data classification.
Information security
The CIA triad of confidentiality, integrity, and availability is at the heart of information security.
(The members of the classic InfoSec triad confidentiality, integrity and availability are interchangeably referred to in the literature as security attributes, properties, security goals, fundamental aspects, information criteria, critical information characteristics and basic building blocks.)
However, debate continues about whether or not this CIA triad is sufficient to address rapidly changing technology and business requirements,
with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy.
Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts.
😉 Two additional attributes are:
- Undisputable: when the information itself is in dispute, that is a serious problem. Needed are the source and the time / period relevance of the information.
- Verifiability: when verification is not possible, there is no underpinning for its usage and the associated risks.
 
Neglected attention points (a minimal sketch follows after the list):
- An important logical control that is frequently overlooked is the principle of least privilege, which requires that an individual, program or system process not be granted any more access privileges than are necessary to perform the task.
- An important physical control that is frequently overlooked is separation of duties, which ensures that an individual can not complete a critical task by himself.
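A minimal sketch, assuming Python, of how these two controls can be expressed in code; the right names, task names, and user names are illustrative assumptions:

```python
# Sketch of two frequently overlooked controls:
# - least privilege: a task is allowed only when the user holds the rights it requires,
#   and users should be granted no more than those rights;
# - separation of duties: the same person may not both submit and approve a critical task.
REQUIRED_RIGHTS = {
    "submit_payment": {"payments.write"},
    "approve_payment": {"payments.approve"},
}

def task_allowed(user_rights: set, task: str) -> bool:
    """Least privilege check: the user must hold the rights the task requires (and no more should be granted)."""
    return REQUIRED_RIGHTS[task] <= user_rights

def duties_separated(submitted_by: str, approved_by: str) -> bool:
    """Separation of duties check: submitter and approver must differ."""
    return submitted_by != approved_by

print(task_allowed({"payments.write"}, "submit_payment"))   # True
print(task_allowed({"payments.write"}, "approve_payment"))  # False: right not granted
print(duties_separated("alice", "alice"))                   # False: same person
```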
 
An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information.
Not all information is equal and so not all information requires the same degree of protection. This requires information to be assigned a security classification.
Classified information
When labelling information into categories, one approach is (see the sketch after this list):
- Public / unclassified.
- Confidential, intended for circulation in the internal organisation and authorised third parties at the owner's discretion.
- Restricted, information that should not be disclosed outside a defined group.
- Secret, strategically sensitive information only shared between a few individuals.
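A minimal sketch, assuming Python, of these labels as an ordered enumeration so that handling rules can compare levels; the value ordering and the read rule are illustrative assumptions:

```python
# Sketch of the four classification labels above as an ordered enumeration.
from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0        # public / unclassified
    CONFIDENTIAL = 1  # internal organisation and authorised third parties
    RESTRICTED = 2    # not to be disclosed outside a defined group
    SECRET = 3        # strategically sensitive, shared between a few individuals

def may_read(clearance: Classification, label: Classification) -> bool:
    """A reader may only see information labelled at or below their clearance level."""
    return clearance >= label

print(may_read(Classification.CONFIDENTIAL, Classification.SECRET))  # False
print(may_read(Classification.SECRET, Classification.RESTRICTED))    # True
```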
Using BI analytics in the security operations centre (SOC).
This technical environment of BI usage is relatively new. It demands very good runtime performance with well-defined, isolated, and secured data. There are some caveats:
⚠ Monitoring of events (IDS) may not be mixed with changing access rights.
⚠ Insight at security design time is limited; insight into granted rights is available.
It is called Security Information and Event Management (SIEM): a subsection within the field of computer security, where software products and services combine security information management (SIM) and security event management (SEM). They provide real-time analysis of security alerts generated by applications and network hardware.
Vendors sell SIEM as software, as appliances, or as managed services; these products are also used to log security data and generate reports for compliance purposes.
Using BI analytics for capacity and system performance.
This technical environment of BI usage is relatively old: optimising the technical system to perform better, defining containers for processes, and implementing a security design.
⚠ Monitoring systems for performance is bypassed when the cost is felt to be too high.
⚠ Defining and implementing a usable agile security design is hard work.
⚠ Aligning the security model and the monitoring for security purposes is a new challenge.
It is part of ITSM (IT Service Management).
Capacity management's primary goal is to ensure that information technology resources are right-sized to meet current and future business requirements in a cost-effective manner. One common interpretation of capacity management is described in the ITIL framework.
ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management.
In the fields of information technology (IT) and systems management, IT operations analytics (ITOA) is an approach or method to retrieve, analyze, and report data for IT operations. ITOA may apply big data analytics to large datasets to produce business insights.
Loss of confidentiality, compromised information.
Getting hacked, being compromised by whale phishing, gets a lot of attention.
A whaling attack, also known as whaling phishing or a whaling phishing attack, is a specific type of phishing attack that targets high-profile employees, such as the CEO or CFO, in order to steal sensitive information from a company.
In many whaling phishing attacks, the attacker's goal is to manipulate the victim into authorizing high-value wire transfers to the attacker.
Government Organisation Integrity.
Different responsible parties have their own opinion on how conflicts about logging information should be solved.
🤔 When information is deleted permanently, there is no way to recover it if that decision turns out to be wrong.
🤔 The expectation that it will be cheaper and of better quality is a promise without warranty.
🤔 With no alignment between the silos, there is a question about which version is the truth.
Auditing, monitoring.
For legal requirements there are standards used by auditors. When they follow their checklist, a list of "best practices" is verified.
The difference with a "good practice" is the continuous improvement (PDCA) cycle.
- Procedures, organisational.
- People, personal.
- Products, physical & cyber.
- Security Operations Centre.
- Infrastructure building blocks - DevOps.
- Auditing & informing management.
Audit procedure processing.
The situation was: infrastructure building blocks - DevOps leading, with auditing and informing management on implementations added for control.
Added is: the Security Operations Centre, leading in evaluating security risk, with auditing and informing management on implementations added for control.
 
The old situation was: application program coding was mainly done in house. This has changed into using publicly and commercially obtained software when possible.
⚠ Instead of a software crisis of lines of code not being understood (business rules dependency), it has changed into used software libraries not being understood (vulnerabilities), with no understanding of how to control them because of the huge number of copied software libraries in use.
⚠ Instead of having only a simple infrastructure stack to evaluate, it has become a complicated infrastructure stack with an additional involved party, making a triad to manage.
 
Penetration testing, also called pen testing or ethical hacking, is the practice of testing a computer system, network or web application to find security vulnerabilities that an attacker could exploit.
Penetration testing can be automated with software applications or performed manually. Either way, the process involves gathering information about the target before the test, identifying possible entry points, attempting to break in - either virtually or for real - and reporting back the findings.
It will only report what is visible to the tester, using tools that cover only what is commonly known. There is no warranty that the system is not vulnerable after corrections are made.
It is well possible that there is no security risk at all because of the way the system is used and managed.
Legal topics: financials
| Links | Text reference |
|-------|----------------|
| SOX (Wikipedia) | The bill, which contains eleven sections, was enacted as a reaction to a number of major corporate and accounting scandals, including Enron and WorldCom. |
| Study and Recommendations on Section 404(b) | Most importantly, the research demonstrates that the costs of compliance with Section 404(b), including both total costs and audit fees, have further declined since the 2007 reforms. The cost, seen as a problem for being compliant, resulted in research studies. |
| The Impact of New Regulations on Financial Intermediary Management | One of the key problems that financial institutions faced when the financial turbulence started in mid-2007 was the urgent funding need that resulted from a high degree of maturity mismatch. While assets tended to have a rather long-term horizon, funding of these investments was often done at the very short end of the yield curve in the wholesale markets for liquidity. |
| EBA advises, final Basel III framework | The EBA welcomes the improvements introduced in the final Basel III package. These include the introduction of a higher degree of risk sensitivity in the standardised approaches to measure credit and operational risks, and constraints to internal modelling by banks where undue variability of model outcomes was observed in the past. |
| European Insurance and Occupational Pensions Authority (EIOPA) | EIOPA's core responsibilities are to support the stability of the financial system, transparency of markets and financial products as well as the protection of policyholders, pension scheme members and beneficiaries. The Solvency II Directive (Directive 2009/138/EC) was adopted in November 2009, and amended by Directive 2014/51/EU of the European Parliament and of the Council of 16 April 2014 (the so-called "Omnibus II Directive"). |
Legal topics: privacy, copyright
| Links (EUR-Lex) | Text reference |
|-----------------|----------------|
| GDPR | Regulation (EU) 2016/679, data protection of natural persons. |
| Law enforcement | Directive (EU) 2016/680, data protection of natural persons by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences. |
| On the legal protection of computer programs | Directive 2009/24/EC, Article 5 Exceptions to the restricted acts, Article 6 Decompilation. |
| Copyright - Article 17 | On copyright and related rights, amending 96/9/EC and 2001/29/EC, 26 March 2019. "1. This Directive lays down rules which aim to harmonise further Union law applicable to copyright and related rights in the framework of the internal market, ..." |
Change data - Transformations
A data strategy that helps the business should be the goal. Process information as "documents" with the detailed elements encapsulated.
Transport & archiving alongside producing it, as a holistic approach.
 
Logistics using containers.
The standard approach in information processing focuses on the most detailed artefacts, trying to build a holistic data model for all kinds of relationships.
This is how goods were once transported, as single items (pieces). That has changed into containers with the goods encapsulated inside.
💡 Use labelled information containers instead of working with detailed artefacts.
💡 Transport of containers requires some time; the required time is, however, predictable.
Trusting that the delivery is on time and that the quality conforms to expectations is more efficient than trying to do everything in real time.
Information containers arrive almost ready for delivery, with a more predictable moment of delivery to the customer.
💡 The expected delivery notice is becoming standard in physical logistics. Why not do the same in administrative processes? A sketch of such a labelled container with an expected delivery notice follows below.
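A minimal sketch, assuming Python, of a labelled information container carrying an expected delivery notice; all names and the example content are illustrative assumptions:

```python
# Sketch of a labelled information container with an expected delivery notice,
# analogous to a shipping container in physical logistics.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict, List

@dataclass
class InformationContainer:
    label: str                 # logical context key, the "label on the outside"
    source: str                # producing party
    shipped_at: datetime       # moment the container left the source
    transport_time: timedelta  # predictable transport duration
    records: List[Dict] = field(default_factory=list)  # encapsulated detailed artefacts

    def expected_delivery(self) -> datetime:
        """The expected delivery notice: shipment moment plus the predictable transport time."""
        return self.shipped_at + self.transport_time

container = InformationContainer(
    label="billing/invoices/batch-0042",
    source="billing",
    shipped_at=datetime(2020, 11, 2, 6, 0),
    transport_time=timedelta(hours=4),
    records=[{"invoice": "0001", "amount": 120.00}],
)
print(container.expected_delivery())  # 2020-11-02 10:00:00
```

The consumer plans on the notice rather than polling for real-time completion, matching the point above that a predictable delivery moment is more efficient than trying to do everything in real time.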
Data Strategy: Tragic Mismatch in Data Acquisition versus Monetization Strategies.
A nice review on this: "Organizations do not need a Big Data strategy; they need a business strategy that incorporates Big Data" (Bill Schmarzo, 2020).
Companies are better at collecting data (about their customers, about their products, about competitors) than analyzing that data and designing strategy around it.
Too many organizations are making Big Data, and now IoT, an IT project.
Instead, think of the mastery of big data and IoT as a strategic business capability that enables organizations to exploit the power of data with advanced analytics to uncover new sources of customer, product and operational value that can power the organization's business and operational models.
Combined pages as single topic.
👓 Info types - different types of data
👓 Value stream of the data as product
✅ Transform information - data inventory
👓 Data silo - BI analytics, reporting
🔰 Most logical back reference.
© 2012,2020 J.A.Karman