automan

Dev->zOps

The mainframe operating environment originated in a glass house. From the 1950s through the 1980s the machines were large, ponderous and expensive. The computers and other hardware devices necessary for data storage and processing were kept on display in glass-enclosed rooms, often the symbolic and practical heart of the enterprise. They were noisy places where printout chattered off line printers and cables led under the raised floor to a multitude of data terminals: terminals where code was developed, and terminals where the customer representative interacted with the information system.
What a Mainframe was: http://www.mainframes360.com/2009/06/what-is-mainframe-computer.html

Computing power was expensive, and so it had to be available to process the business of the enterprise as close to 100% of the time as possible.
The expense of large-systems computing led to service bureaus, which for a fee would run a user's applications in a shared environment.
In a shared environment it was possible to inconvenience many clients simultaneously when an update went wrong; failed updates have left supermarkets without deliveries and banks unable to process funds.

The work these machines did was important, and to prevent outages rigorous standards for system and software upgrade processing were implemented, supported by a variety of change-control programs. At first, change procedures required systems personnel to sign off on their confidence in the upgrade. Each change had to move from development testing to operational use on the mainframe via a route of testing and acceptance, arbitrated by the observations of systems personnel and operations staff.

The systems derived from MVS have accumulated vast amounts of intellectual property, in systems designs that define the operating principles of many organizations. Vast databases have been accumulated, and business views of them have been defined by the minds of countless COBOL, PL/I and Assembler programmers. What used to happen, and how it has evolved, now affects how we apply an automated Development to Operations cycle. The way things were done in the past creates issues that we must deal with now to enable successful Dev->Ops.

Linux and Windows users are able to click an icon on their desktop and accept program or system upgrades. Every step is automated, from the display of the upgrade icon to the installation: automated, simple and out of mind. To achieve this with z/OS, a logical set of processes must occur in order to successfully transmit the upgrades, accept them into a test arrangement, then migrate and apply them. Without this happening accurately and predictably, every time, it would not be possible to implement an acceptable model for the Ops side.
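As a rough illustration of that staged flow, here is a minimal sketch in Python, assuming hypothetical stage functions that stand in for the installation's own tooling (on z/OS the receive, apply and accept work would typically be done by SMP/E and the site's change-control jobs). Each stage must complete cleanly before the next is allowed to run:

```python
# Conceptual sketch only: every stage_* function below is a hypothetical
# placeholder for real tooling (file transfer, SMP/E jobs, promotion scripts).
import logging

log = logging.getLogger("zos-upgrade")

def stage_transmit():         return True   # placeholder: ship the package to the target system
def stage_receive_to_test():  return True   # placeholder: receive the package into a test environment
def stage_acceptance_test():  return True   # placeholder: run automated acceptance checks
def stage_migrate_apply():    return True   # placeholder: promote and apply to the next level

def upgrade_pipeline():
    """Run the stages in order; halt the whole flow on the first failure."""
    stages = [
        ("transmit", stage_transmit),
        ("receive-into-test", stage_receive_to_test),
        ("acceptance-test", stage_acceptance_test),
        ("migrate-and-apply", stage_migrate_apply),
    ]
    for name, action in stages:
        log.info("starting stage: %s", name)
        if not action():
            raise RuntimeError(f"stage '{name}' failed; upgrade halted, nothing promoted")
        log.info("completed stage: %s", name)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    upgrade_pipeline()
```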

Virtualization, and the ability to create operating system copies tailored to specific requirements, presents new challenges for the development of operational acceptance and installation to production status.

Virtualization makes it quite feasible to bring up a virtual z/OS region to run automated acceptance testing. A mainframe operating system environment like z/OS follows an accumulated set of processes derived from the idea that many users will require and share its resources. It is at heart a time-sharing, multi-programming environment, designed to do work on behalf of a multitude of users at once.
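A sketch of that idea, assuming hypothetical provision, run_suite and discard helpers that stand in for whatever the installation uses to clone, IPL and delete a test image (for example under z/VM); the point is that the region exists only for the life of the test run:

```python
# Hypothetical stand-ins for the installation's own provisioning tooling.
from contextlib import contextmanager

def provision(image):
    print(f"cloning and starting test region from {image}")
    return image + "-TEST"

def discard(region):
    print(f"shutting down and deleting {region}")

def run_suite(region, change_id):
    print(f"running acceptance tests for {change_id} on {region}")
    return True

@contextmanager
def test_region(image):
    region = provision(image)
    try:
        yield region
    finally:
        discard(region)          # always torn down, whether the tests pass or fail

def acceptance_run(image, change_id):
    with test_region(image) as region:
        return run_suite(region, change_id)

print(acceptance_run("ZOS24", "CHG0001"))
```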

Desktop operating systems like the Windows-based ones (XP, 7, 8, 10) make a different set of assumptions at a fundamental level about how they will be used. They assume one user, a display panel, and interfaces to control and reach out on multiple communications channels. z/OS does not care about window position or presentation and does not spend time on anything like that. z/OS is an operating system that likes to get to the point, execute the work as quickly and efficiently as possible, then move on and service another thread. The principle of never developing on a production image has long been established on these large systems. The developer always works on the user's application on a development system, separated from the production system that will eventually receive the work as an update.


There is, and always has been, a natural separation in the path from Dev to Ops. It is important that the processes operate smoothly and independently, so that the next time an update is thought of is when the user finds it offered to them by the update notification procedure.

For Dev->Ops in a mainframe environment to work, a consistent set of well-defined sequences must take place in an orderly manner. The sequence must recognize errors and fail harmlessly, having made no change. If an operational installation breaks anything, backout must happen automatically.
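In pseudocode terms the rule is simple; the sketch below assumes hypothetical preflight, apply, verify and backout callables standing in for the installation's real change jobs, and only illustrates the decision logic:

```python
def install_change(preflight, apply, verify, backout):
    """Apply a change only if it is safe, and back it out automatically if it breaks anything."""
    if not preflight():
        return "rejected"      # error recognized up front: fail harmlessly, nothing changed
    apply()
    if verify():
        return "installed"     # change is live and healthy
    backout()                  # something broke: automatic backout, no operator decision needed
    return "backed-out"

# Example wiring with trivial stand-ins:
print(install_change(
    preflight=lambda: True,
    apply=lambda: None,
    verify=lambda: False,      # pretend the post-install check failed
    backout=lambda: None,
))                             # -> backed-out
```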

What this means is that programmed instructions must be issued to initiate logical sequences of work, then responses watched for and appropriate counter-responses chosen. The goal is to ensure that an update works every time, in every circumstance: 100% confidence that the update will be received and implemented, and that the only thing anyone will notice is the enhanced processing provided by the upgraded function. It is quite possible to reduce the operations of a typical z/OS configuration to a set of scheduled and regulated work that needs operational support only in emergencies, when the problem is physical and cannot be handled by logic.
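The issue-a-command, watch-the-responses, choose-a-counter-response pattern can be sketched as below; issue_command and next_message are hypothetical stand-ins for however the automation reaches the system console, and the messages are invented for the example:

```python
import time
from collections import deque

# Hypothetical console stand-ins; real automation would talk to the system
# console or an automation product's interface instead.
_console = deque(["TASKX NOT FOUND", "TASKX STARTED"])

def issue_command(cmd):
    print(f"issuing: {cmd}")

def next_message():
    return _console.popleft() if _console else None

# Counter-responses keyed on message content.
COUNTER_RESPONSES = {
    "NOT FOUND": lambda: issue_command("START TASKX"),
}

def run_and_watch(command, expected, timeout=30):
    """Issue a command, watch responses, counter-respond, and report success or failure."""
    issue_command(command)
    deadline = time.time() + timeout
    while time.time() < deadline:
        msg = next_message()
        if msg is None:
            time.sleep(1)
            continue
        if expected in msg:
            return True                          # the response we wanted
        for token, counter in COUNTER_RESPONSES.items():
            if token in msg:
                counter()                        # choose the appropriate counter-response
    return False                                 # no usable response: treat as a failure

print(run_and_watch("DISPLAY TASKX", "STARTED"))
```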

MVS, from which z/OS derives, has bequeathed us a set of well-defined interfaces. It was intended that users tailor the OS to their needs by inserting tables, parameters and exit programs, and these enable us to exercise strict control through simple API functions.
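As a loose analogy only (not the actual MVS exit interfaces), the sketch below shows the exit-program idea: fixed system processing that calls out to user-supplied routines at defined points, so the site can impose its own rules without changing the system code:

```python
# Conceptual analogy of exit points: user code registered at defined hooks.
EXITS = {}

def register_exit(point, func):
    EXITS.setdefault(point, []).append(func)

def call_exits(point, data):
    for exit_routine in EXITS.get(point, []):
        data = exit_routine(data)       # each exit may inspect or adjust the data
    return data

# A user-written "exit" that enforces a site rule during job submission.
def require_account_code(job):
    if "account" not in job:
        job["rejected"] = True
    return job

register_exit("job-submit", require_account_code)
print(call_exits("job-submit", {"name": "PAYROLL"}))   # -> {'name': 'PAYROLL', 'rejected': True}
```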


The field of computing is changing, assigning purpose to different technologies that must interact with each other to achieve corporate ends. No one technology can serve all ends, and each technology in the line between need and service must be identified, designed, built and put in place.


Systems managers would like to know that, should there be an emergency and their main systems fail, they could instantly and with full confidence give a command to have a duplicate of their operations up and running elsewhere without problems. This kind of confidence can only come from serious automation and systems control. Automation is required to test the circumstances and arbitrate the recovery procedure, and to ensure that everything needed is available and up. Automation attempts to deal with and respond to situations, and only contacts human repair technicians if there is a problem that cannot be overcome by electronics and logic.
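That arbitration step can be sketched as a simple decision: verify that everything the duplicate environment needs is available and up, fail over if so, and escalate to a person only when logic cannot resolve the situation. All of the checks below are hypothetical stand-ins:

```python
def arbitrate_recovery(checks, failover, notify_human):
    """Fail over automatically when every prerequisite holds; otherwise escalate to a human."""
    failed = [name for name, ok in checks.items() if not ok()]
    if not failed:
        failover()                                  # full confidence: bring up the duplicate
        return "failed-over"
    notify_human(f"recovery blocked, manual action needed: {failed}")
    return "escalated"

# Example wiring with trivial stand-ins:
print(arbitrate_recovery(
    checks={
        "alternate-site-power": lambda: True,
        "replicated-data-current": lambda: True,
        "network-paths": lambda: False,             # pretend a physical link is down
    },
    failover=lambda: print("starting duplicate environment"),
    notify_human=print,
))
```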
automan

The design of solutions to bring z/OS into the Dev->Ops world lies in understanding its past.


 
Attached Files
devOps.pdf (124.69 KB)
