Top 10 DB2 Support Nightmares & How to Avoid Them #4

Here is part four in our top ten DB2 support nightmares series, in which we look at DBA performance!


Feeling Unsupported?

Sometimes it is necessary for organisations to stay on older, unsupported database versions. There are many reasons why an organisation may choose, or be forced, to stay on an out-of-support version of DB2:

- Cost – an upgrade can be expensive both in terms of licence fees and manpower to manage the upgrade

- Time – the time involved for the upgrade may mean that key staff would be pulled off everyday duties for too long

- Risk to existing applications – some applications may only support DB2 up to a certain release. Moving to a newer version could therefore cause serious problems for the running of the application

- Good old inertia! Yes, sometimes organisations simply can’t find the time to think about an upgrade and “if it ain’t broke, don’t fix it”

Any of this sound familiar? Whatever the reason for staying on an unsupported version of DB2, it can mean considerable risk should any problems arise.

Consultancy on Demand is available on all versions of DB2 including those which are officially out of support with IBM. This means that even if you’re forced to stay on an officially unsupported DB2 version you can still benefit from access to top DB2 skills when you need them.

Using Consultancy on Demand for out of support DB2 versions is a great “insurance policy” for your DB2 database. Find out more about Consultancy on Demand

IBM DB2 for LUW out-of-support versions


Top 10 DB2 Support Nightmares & How to Avoid Them #3

Here is part three in our top ten DB2 support nightmares series, in which we look at why organisations can’t get by without a skilled DBA.


Do you know what’s going on under the covers?

It is often the case that software applications are shipped alongside their preferred database, installed and used effectively with no issues. It usually makes little difference which database the application vendor has chosen; as long as things are running well, support for the underlying database is not considered an issue. However, from time to time we hear from organisations who have purchased an application and are experiencing issues with a database their IT team has no experience of, because it was included as part of an application implementation.

Applications such as Maximo, SAP, Tivoli Storage Manager and IBM Business Process Manager all recommend DB2 as the preferred database. Organisations who run Oracle or SQL Server can hit problems if they experience any DB2-related issues and don’t have the in-house resources to deal with them. The software vendor will usually provide excellent support for the application itself, but it is important to ensure that the underlying database is looked after too. You need to know what’s going on “under the covers” to keep your application running smoothly.

Training Oracle or SQL Server DBAs on DB2 can be expensive and is probably not a great use of their time, given that the majority of their workload will not involve DB2. Similarly, hiring a dedicated DB2 DBA in this scenario is also going to prove expensive and is unlikely to be necessary day to day. However, on the odd occasion when organisations do need help with the DB2 database underlying their application, a Consultancy on Demand contract may be the answer. Simply purchasing 20 hours of DB2 consultancy means that if any issues arise you can call Triton, who will provide a consultant, either on-site or remotely, to get things resolved. This is a far more cost-efficient way of dealing with this type of issue. Triton’s team of DB2 experts will be able to pinpoint exactly what’s going on in the DB2 database under the covers of your application.

Find out more about Consultancy on Demand

What our customers say about Consultancy on Demand

Julian Stuhler talks about DB2 11 – Big Data & Analytics

Triton’s Solutions Delivery Director Julian Stuhler talks about DB2 11 – The Database for Big Data & Analytics – listen and see how DB2 11 can help your business reduce costs.

Read Julian’s latest article for IBM Data Magazine – How to Have a Happy Conversion Mode Day

Download the white paper - How to Have a Happy DB2 11 Conversion Mode Day


INGEST as an ETL Tool – Don’t miss Mark Gillis on the DB2Night Show Tomorrow!

Listen to the REPLAY

Our very own Mark Gillis will be making his debut appearance on the DB2Night Show tomorrow – make sure you tune in!

Mark is one of our top DB2 experts, working on DB2 LUW and supporting a wide range of clients within the RemoteDBA and consulting teams here at Triton. He has recently written some blogs and articles on DB2 INGEST and has conquered several performance challenges! During this show, Mark will focus on his recent successes using INGEST as a robust ETL tool, plus provide some general information and advice on INGEST.

INGEST is promoted as “Continuous Data Ingest” to reinforce its ability to stream data into a target database, without interruption and with high processing performance. What isn’t so obviously promoted is the sophisticated mechanism INGEST provides to process data from a source system, transform or enrich the content, apply it to the correct data stores using embedded metadata, and do so with a single command. It can leverage existing DB2 features to, in effect, perform your ETL processing without any software other than DB2 itself.
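As a minimal sketch of how this looks in practice (the file, table and column names here are entirely hypothetical), a single INGEST command run from the DB2 command line processor can read a delimited file, transform field values with SQL expressions and apply the results using merge logic:

    -- Feed a delimited extract into a target table, enriching values on the way in
    INGEST FROM FILE /data/sales_feed.del
      FORMAT DELIMITED BY ','
      (
        $prod_id  INTEGER EXTERNAL,
        $qty      INTEGER EXTERNAL,
        $price    DECIMAL(10,2) EXTERNAL
      )
      MERGE INTO sales.daily_totals
        ON (prod_id = $prod_id)
        WHEN MATCHED THEN
          UPDATE SET qty = qty + $qty,
                     revenue = revenue + ($qty * $price)
        WHEN NOT MATCHED THEN
          INSERT (prod_id, qty, revenue)
          VALUES ($prod_id, $qty, $qty * $price);

Because INGEST commits at regular intervals and the target table remains available throughout, the same pattern can be run continuously – which is exactly what the “Continuous Data Ingest” label refers to.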

Register here

Posted in DB2, DB2 LUW | Tagged , | Leave a comment

Happy 50th Birthday Mainframe!

To celebrate the mainframe’s 50th birthday, some of our Triton Consultants share their favourite mainframe stories:

My first contact with the mainframe was as a lowly graduate Trainee Programmer at a large chocolate manufacturer. All programs were stored on punch cards, partly because they were very old code and partly to give us youngsters a sense of history. As part of my training I was slated to do a few nights of ‘Ops’, including a graveyard shift. Much of the work was collecting vast sheets of green-lined printout from the laser printers and part of it was loading programs to be read in and executed on the mainframe. One particular program was something that had been written to calculate the potential cocoa bean crop. It was a big enough program that it had to be delivered on a trolley, from the card store, down to the mainframe room and then inserted into the reader in blocks about the size of a house brick.

Well, I managed to wheel the trolley down to the mainframe room, and then sat there for about an hour feeding cards into the reader. The mainframe digested this code and went off to calculate the results, a process that was going to take about another 4 hours, so I started to wheel the trolley back out of the data centre over to the long-term card-store, through the car park. Unfortunately, as I was making my way through the car park (it was about 4 a.m. and this was back in the days when I had a social life, so I probably wasn’t at my most alert), I put one wheel in a drain and tipped the whole cart over. Punch cards everywhere.

I’d like to say that I owned up, dragged the whole lot back inside and spent hours and hours putting all the cards back in the right order. What I in fact did, was stuff the whole lot back into boxes willy-nilly, on the grounds that it was going to be 12 months before the thing needed to be run again. I doubt if the mainframe made much sense of the program the next time it saw it but, if it did succeed in reading the thing, I’m pretty certain it came up with a wildly inaccurate forecast for cocoa bean production.

Anonymous, Triton Consultant!

When I started working with mainframes in 1985, my first employer had just moved away from the use of punched cards as a means of inputting programs and data to the IBM machines (an IBM S/370 3081 I think). This meant a large number of unused punched cards suddenly became surplus to requirements, and people found some very inventive ways of making use of them – the most popular being prompt cards for speakers to use when giving presentations. I remember bringing home a stack for my mum to use as recipe cards, and she still has a few of them today.

Julian Stuhler, Triton Director

As a relative newcomer to mainframes, having missed out on the first 20 years of mainframe history, one of my most vivid recollections is relatively recent, from back in the late 1980s whilst I was working as a Computer Operator in a small Data Processing department (IT didn’t exist back in the 1980s; it was all Data Processing).

Having years earlier consolidated our Sperry Univac and Honeywell based applications onto an IBM 4381 MVS/JES2 complete with banks of tall, shiny, blue 3380 cabinets, we had just performed our first major IBM mainframe upgrade, moving to a 3090 MVS/ESA. All had gone well and availability, particularly compared to the days of Sperry Univac and Honeywell, had improved massively.

A few months on during an unremarkable night shift, whilst determining which takeaway establishment would be lucky enough to provide us with our evening meal (one of the most important responsibilities of a shift leader), there was a ring at the delivery door. No one rang the delivery doorbell other than local kids messing about. And it was a long walk down the dark, echoing corridor from the Computer Room to the delivery door.

“It’ll be kids. Just ignore them,” advised one of my team, urging me to make a decision on takeaway choice by shuffling a variety of menus in front of me.

Another, longer buzz from the delivery door.

“I’d better go. Even if just to warn the kids off”

I left the strangely soothing hum of the computer room to head down the eerily silent, dark, echoing corridor. To my surprise it wasn’t kids at the delivery door but a delivery man:

“Evening”

“Evening”

“Parcel for Data Processing Computer Room. Sign here please.”

“But we haven’t ordered anything?”

“Well it’s for this address. From The Netherlands. IBM by the looks of it”

“IBM in The Netherlands? We haven’t called IBM for anything. I’ll sign anyway”

I wandered back down the long, dark, echoing corridor to the safe haven of the Computer Room, puzzling over the mysterious parcel. The unremarkable night shift continued without further excitement or event, other than plentiful amounts of pizza, until daylight broke about 07:00 and, soon after, another buzz at the delivery doorbell. Someone’s in early, I thought. A quick jog down the long corridor and I eventually opened the delivery door, but not to someone I recognised.

“Morning. How can I help?”

“Hi. I’m Jeff the IBM engineer. I’ve come to fix one of your 3880 disk controllers. I assume the new part turned up last night from The Netherlands?”

“There’s nothing wrong with our 3880s? And we certainly didn’t raise an issue with IBM, but yes a parcel from the Netherlands turned up last night.”

“Ahh I’ll explain”

Over a cup of tea, Jeff the IBM engineer explained that the new 3090 mainframe configuration also included self-diagnosis software. What had happened was that the self-diagnosis software had identified performance degradation in one of the 3880 disk controllers, worked out which component needed replacing, contacted the relevant department to order a new 3880 part from The Netherlands, and scheduled an engineer visit. Self-diagnosis had proactively identified a potential issue, preventing a possible outage. These days the response would be ‘so what’, but nearly 25 years ago this was the stuff of science fiction, given that the internet was in its infancy and unheard of by the vast majority of people. Even email was fairly new.

Often labelled as ‘legacy’ and ‘prehistoric’, even a quarter of a century ago mainframe hardware and software was right at the cutting edge of technology, with innovative ways of providing high availability. And self-diagnosis, followed by part ordering and engineer booking, is still something well beyond many of today’s other operating platforms.

Paul Stoker, Triton Director

I started work on mainframes in December 1983. Access was via a terminal called a 3277, which displayed green characters on a black background. I had access to two mainframes. One ran an operating system called VM and was used to support the email system. The second ran MVS, the forerunner of z/OS, and was used to run a system called RETAIN, which was IBM’s defect support system. 30 years ago RETAIN was already 24/7 and supporting data mirroring!

You could only be signed on to one of the systems at a time. To use the other you had to log off, physically turn a switch and log on again.

Nick Smith – Triton Associate Consultant

IT cost reduction & optimisation top the priority list for Mainframe customers

In their recent Mainframe study, BMC Software asked Mainframe customers what their top four IT priorities were. Cost reduction and optimisation came out on top, with 85% of respondents citing cost reduction as one of their key priorities.

The study went on to ask customers about their 4 Hour Rolling Average (4HRA). This is the workload figure used by IBM to determine software costs. Batch jobs in various forms determined the peak for 62% of respondents, with prime online processing accounting for 37%.

One of the questions our consultants are often asked is “How can we reduce our Mainframe software costs?”. It can often seem that there is nothing that can be done to reduce these costs. Simply maintaining the status quo and not allowing costs to rise can be a challenge in itself for capacity planning teams.

When looking at batch processing, an unmanaged workload mix can greatly affect whether batch processing contributes a large chunk of Mainframe software costs. By unmanaged workload mix we mean that sometimes, without realising it, organisations run non-essential batch processing jobs during the prime shift, pushing the workload peak up significantly. Every month a workload report is produced using the Sub-Capacity Reporting Tool (SCRT) and sent to IBM. This is used to determine the peak workload and therefore the charges applied for the software used. By carefully analysing the SCRT report and related SMF data it is possible to gain a clearer view of where peaks are occurring. Moving batch workload to a different time can bring peaks down, reduce the 4HRA and thus reduce costs.
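To make the 4HRA mechanics concrete, here is a simplified worked example with entirely hypothetical MSU figures. The 4HRA at any point is the average consumption over the preceding four hours, and the monthly peak of that average is what determines the charge:

    Hour                  10:00  11:00  12:00  13:00
    Online MSUs             600    700    750    700
    Movable batch MSUs      200    250    250    200

    4HRA with batch in prime shift = (800 + 950 + 1000 + 900) / 4 = 912.5 MSUs
    4HRA with batch moved off-peak = (600 + 700 +  750 + 700) / 4 = 687.5 MSUs

Rescheduling that batch work outside the peak window cuts this hypothetical peak by around 25% – provided, of course, that the moved jobs don’t simply create a new, higher peak elsewhere in the month.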

Badly performing or slow-running applications are another source of woe when it comes to pushing up Mainframe costs. The graph below shows an example of MSU usage by application. If the total peak is 1900 MSUs, this is the level at which software products will be charged. For example, if your CICS application has not been tuned as well as it could be, or has a performance issue and is using more MSUs than it should, then the entire peak will be raised and the associated software costs will rise with it. There are potentially significant savings to be made by ensuring that the system is running as efficiently as possible.

[Graph: example MSU usage by application]

These are just a couple of examples of the ways organisations can reduce their Mainframe software costs; there are many, many more. Triton’s full zTune service looks at each and every one of these options and can bring organisations savings of 5% for a Phase 1 study and a further 10-15% for Phases 2 and 3.

With 93% of respondents in the BMC study indicating that the Mainframe is a long-term business strategy, finding ways to optimise and reduce costs is going to be vital for organisations in the years to come.

Find out more about zTune

BMC Survey

Triton customer CPA Global talk about why Consultancy on Demand works

CPA Global is the world’s top intellectual property (IP) management and IP software specialist, and a leading provider of outsourced legal services. With offices across Europe, the United States and Asia Pacific, CPA Global supports many of the world’s best known corporations and law firms with a broad range of IP and broader legal services.

Triton Consulting provide CPA Global with RemoteDBA services and Consultancy on Demand. In this podcast we talk to Juandi Abbott, Business Integration Manager, about her experiences of working with Triton.

As part of the Consultancy on Demand service, Triton have carried out:

• Training and workshop sessions conducted on-site

• Reviewing and updating database and instance level configuration parameters

• Designing and implementing a data placement strategy (including tablespace and bufferpool configuration)

• Designing and implementing a housekeeping strategy (known locally as RRR housekeeping: REORGs, RUNSTATS and REBINDs) specifically interfacing with the customer application – see the sketch after this list

• Performance advice for specific queries and application workloads

• Database-specific assistance during application releases

• Design advice on DB2 features and tools and how these can be used within the application.
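For readers unfamiliar with the RRR cycle mentioned above, here is a rough sketch of the core DB2 LUW commands behind it; the table and package names are hypothetical, and a real housekeeping strategy automates the selection of which objects actually need attention:

    -- Reorganise the table and its indexes to reclaim space and restore clustering
    REORG TABLE app.orders;
    REORG INDEXES ALL FOR TABLE app.orders;

    -- Refresh optimizer statistics so access plans reflect the current data
    RUNSTATS ON TABLE app.orders WITH DISTRIBUTION AND DETAILED INDEXES ALL;

    -- Rebind static packages so they pick up the new statistics
    REBIND PACKAGE app.orderpkg;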

Find out more about Consultancy on Demand

Read more case studies

Read more testimonials

Great NEW event from IDUG in the UK – Technical Seminar London 2014

IDUG and IBM have announced a free one-day technical seminar in London this April. This is a really special event, with two of IBM’s top Distinguished Engineers as guest speakers: John Campbell from the Silicon Valley Lab and Namik Hrle from the Boeblingen lab. We are also delighted to announce that our very own Julian Stuhler will be opening the keynote session.

The seminar is being held at the IBM Client Centre on London’s South Bank, and there will be both a DB2 for z/OS track and a DB2 LUW track.

To find out more and book your place, visit the IDUG website.