IDUG EMEA 2015 Day 4

Dublin hosted IDUG EMEA for the first time and what a warm welcome we received. Plenty of great company, great food and of course great Guinness.

Day 4 is always tinged with sadness as it is the end of IDUG EMEA for another 12 months. Not so for the busy IDUG volunteers, who are already in full planning mode for next year’s event in Brussels. Without the IDUG volunteers there would be no conference, so many thanks to all of them.

It was an early start on Day 4, particularly following the very enjoyable IBM awards party on Wednesday evening at the Guinness Storehouse. There were some well-deserved winners of awards: regional user groups and individuals who contribute to the DB2 community throughout the year. There was also recognition for Triton associate Colin Raybould, who passed 4 certifications in 3 days. As a souvenir of the IBM event we all received an engraved Guinness pint glass which, given the volumes of ‘The Black Stuff’ I have consumed over the last few days, is likely to be used for nothing stronger than water. For a while at least.

Back to the early start on Day 4, and a very topical presentation from IBM’s Gareth Jones on performance monitoring of dynamic SQL. Being a typical DBA control freak, dynamic SQL makes me a little nervous. With static SQL I can view access paths and report on package performance to provide trend analysis and enable early indication of potential problem SQL. Dynamic SQL is a little more problematic. However, Gareth’s presentation offered ideas and processes for explaining dynamic SQL and capturing performance data. I will certainly be improving dynamic SQL performance monitoring for our customers where applicable.
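I won’t attempt to reproduce Gareth’s material here, but one common starting point for getting visibility of dynamic SQL (not necessarily the exact process he described) is to externalise the dynamic statement cache with EXPLAIN and then report on the captured statistics:

```sql
-- Snapshot the dynamic statement cache into DSN_STATEMENT_CACHE_TABLE
-- (the EXPLAIN tables must already exist, and IFCID 316/318 tracing
-- needs to be active for the runtime statistics to be populated)
EXPLAIN STMTCACHE ALL;

-- Report the heaviest cached statements by accumulated CPU
SELECT STMT_ID, STAT_EXEC, STAT_CPU, STAT_ELAP,
       SUBSTR(STMT_TEXT, 1, 60) AS STMT_START
FROM   DSN_STATEMENT_CACHE_TABLE
ORDER  BY STAT_CPU DESC
FETCH  FIRST 20 ROWS ONLY;
```

Run periodically, something along these lines gives you the same kind of trend data for dynamic SQL that we already take for granted with static packages.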

The final presentation in Dublin was a very interesting closing keynote by Jonathan Adams describing the challenges that the ever-growing Digital workload will present. Jonathan was very engaging and an excellent presenter, providing valuable insight into what we should be worrying about in the coming years as Digital data demands increase.

It may have been the first time Dublin welcomed us IDUGers, but judging by the Craic that everyone had, I suspect it won’t be the last. Sláinte to all and see you in Brussels, which I’m sure will be just as successful with our very own Iqbal Goralwalla as the IDUG Europe CPC chair.

Posted in DB2, DB2 10, DB2 Administration, IDUG, IDUG EMEA 2015, Paul Stoker, System Z, z/OS, zSeries

IDUG EMEA 2015 Day 3

So, Day 3 of this year’s IDUG Europe is in the books, and as usual it’s been a busy one. After all of the frantic activity leading up to Monday’s Triton/DBI party, last night the DB2 Geeks stepped it down a notch and visited the excellent F.X. Buckley restaurant for our traditional Tuesday night Triton team meal. Wow. Several of us (including a couple of VERY serious meat eaters) are on record as saying that was the best steak we’ve ever tasted, and one of the Geeks has gone so far as to change his Facebook profile image to a photo he took of his ribeye! I’ll be making a beeline for this place when I’m next in Dublin.

Back at the conference, I’m enjoying my first IDUG in 18 years without any volunteer duties. John Campbell kicked off the day with his usual mix of great technical information and customer war stories, this time focusing on the CPU and elapsed time gains possible in high-volume OLTP applications by exploiting thread reuse and RELEASE(DEALLOCATE). Nothing particularly new here, but it’s surprising how many DB2 10 and DB2 11 customers are leaving some fairly easy performance gains unclaimed by failing to exploit this now that the DBM1 31-bit virtual storage constraints are a fading memory.
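For anyone who hasn’t looked at this yet, the change itself is a simple rebind; the collection and package names below are invented for illustration:

```sql
-- Rebind a high-volume OLTP package so that it holds its table space
-- locks and EDM resources until thread deallocation rather than
-- releasing them at each commit, maximising the benefit of thread reuse
REBIND PACKAGE(OLTPCOLL.PAYMENTS) RELEASE(DEALLOCATE)
```

The usual caveat applies: packages bound this way hang on to their resources, which can get in the way of DDL and utilities, so it’s best reserved for stable, high-volume transactions running on reused threads.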

Another standout presentation was Bart Steegmans sharing his experiences with Recent DB2 for z/OS Customer Pain Points and How to Address Them. This presentation covered a lot of ground, but each of the case studies was relevant and contained important lessons for us all. Bart finished up with a nice section on exception-based monitoring that could easily have filled an hour on its own! I’m seeing more customers willing to invest in some sort of DB2 performance warehouse, but there are still too many that are not.

Next up was the DB2 Experts Panel, which I was honoured to be asked to participate in this year. The hour set aside for the panel just flew by, and for the first time I can remember we didn’t have time to address all of the pre-submitted questions. I believe that this is a direct result of the new IDUG mobile app making it so easy to submit questions, and we’ll have to work harder to prioritise the questions if this happens again in the future.

Final session of the day for me was courtesy of BMC’s Ken McDonald, who shared several tragically entertaining tales of woe in his “Recovery Tales from Experience” presentation. Ken’s career as a DB2 recovery / logging specialist has put him in a unique position to share some of the common traps that many customers fall into. I suspect most people left the room with a resolution to finally make the time to revisit and test those recovery jobs they wrote 5 years ago. I know I’ll be using some of his war stories as an additional incentive to help my customers focus on the importance of a solid backup and recovery strategy.

Tonight is the eagerly-anticipated IBM customer event at the Guinness Storehouse in the centre of Dublin, so there will be no shortage of the famous Black Stuff. As a keen advocate of bringing the conference here during my years on the Conference Planning Committee and the Board, it’s great to see so many people enjoying all that this great city has to offer. I hope we return here again, and soon.

Posted in Availability & disaster recovery, DB2, DB2 10, DB2 11, IDUG, IDUG EMEA 2015, Julian Stuhler, System Z

IDUG EMEA 2015 Day 2

I always enjoy a chat with the locals when we have an IDUG event, so I spent the 30-minute taxi journey from the airport to the hotel in conversation with my driver. I think I actually understood about 3 words. It was only today when watching the local news that I realised the expression “Durhurlin” he continually referred to was “The Hurling”: an enormously popular local sport that looks like a combination of lacrosse and medieval warfare.

Luckily our presenters here at IDUG are more easily understood. With my own presentation out of the way on Monday afternoon, I’ve been able to relax a bit and concentrate on absorbing the information from other sessions. And there have been some fine presentations. Daniel Luksetich was entertaining and informative this morning and I’ve come away from that with a whole bunch of Advanced SQL Features to play with when I get home. Or maybe I’ll take his advice and curl up with the SQL Reference Manual and a Scotch….

Matt Huras did two sessions this afternoon: one on Minimizing Outages and one on BLU Best Practices. There was a huge amount of information and a few ‘Eureka’ moments, particularly in the section on accomplishing rolling upgrades. I found myself mentally composing the email I’m going to send later on to one of our clients saying ‘about that software upgrade concern you have: have I got some good news for you…’.

I was chatting to Mika Lindholm at our Triton bash (or should that be hooley?) last night: he hails from Finland and tells me that the population is only about 5 million. So how come they have such a wealth of DB2 talent? Anyhow, he had some customer experiences to share on the use of BLU and how, sometimes, you’re still going to find that row-based tables are your best bet. One interesting aside from the audience: in his examples Mika had a large table (8 billion rows) and a ‘moderately’ sized table of 300-odd million rows. There are probably quite a few of us who are old enough to remember when 300 million rows wasn’t moderate at all; more like huge. It makes you wonder what sort of data volumes we’ll be dealing with 10 years from now.

So we have a couple more days of IDUG to go and much more expert opinion to absorb. Thanks to all involved for the hard work in putting this together and to the people of “Dublin’s Fair City, Where the girls are so pretty”. It’s been a great Craic. See you all out in Brussels next year.

Posted in BLU, DB2, DB2 BLU, DB2 LUW, IDUG, IDUG EMEA 2015, Mark Gillis

IDUG EMEA 2015 Day 1

Arriving at lunchtime in Dublin the Sunday before the IDUG EMEA 2015 conference kicks off does leave one with half a day to kill. The half day in question was blustery with the threat of rain and so we elected to spend it indoors doing what – it seems – a large number of tourists do here, drinking Guinness. The general consensus was that it was much nicer here than back home, but one member of the party could only manage it (and then with some enthusiasm) with blackcurrant cordial added!

My interests at IDUG always tend towards the systems side of the fence, so I kicked off with Chris Crone’s hardware and z/OS synergy presentation. This contained a huge amount of information (80 foils!) and focused on the technical zSeries enhancements being brought in with z13 and how these relate to DB2. A good introduction to the move away from the “GHz chase” – faster clocks with each new generation – and how the focus is now on doing more in parallel, whilst not consuming more power or pushing out more heat.

This was then followed by Kewei Wei’s “Getting the best Query Performance in DB2 11 for z/OS” session. Good sections on APREUSE and managing statistics across releases.

The afternoon’s sessions kicked off with an excellent presentation from Jarmo Mannikko on deleting quiesced members from a data sharing group and the challenges that KELA faced getting rid of some redundant services.

John Campbell and Flo Dubois then told us all about the availability enhancements in DB2 V11 data sharing. Too many good things to even start to talk about – have a look at the presentation on the IDUG web site!

I finished the day with Jane Man’s excellent presentation on JSON features in DB2 for z/OS. Introduced through maintenance, this is in its early days, but very useful for people looking at mobile applications.
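For the curious, the function-based flavour of this support looks roughly like the sketch below; the table and column names are my own invention:

```sql
-- JSON documents are held as BSON in a BLOB column...
CREATE TABLE CUSTOMER_DOCS
      (ID   INTEGER NOT NULL PRIMARY KEY,
       DATA BLOB(16M));

-- ...and scalar values are extracted with the JSON_VAL built-in
-- function ('s:40' requests a VARCHAR(40) result)
SELECT ID, JSON_VAL(DATA, 'address.city', 's:40') AS CITY
FROM   CUSTOMER_DOCS;
```

As Jane stressed, this arrived through the maintenance stream and is still maturing, but it lets mobile-style applications store and query JSON without leaving DB2.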

The vendor hall opened and drinks and nibbles were taken before we charged off to the Triton and DBI drinks, held this year in M O’Brien’s pub just down the road from the conference. As well as the traditional Irish band, more food and Guinness, we were entertained by Paul, Scott Hayes and Julian running a DB2 Bingo session. And more Guinness to celebrate Ireland’s football team qualifying for the European Championships in 2016.

I have a feeling this will not be the last Guinness we have this week.

Posted in DB2, DB2 11, DB2 Geek, DBI, IDUG, IDUG EMEA 2015, James Gill, Julian Stuhler, mainframe, Paul Stoker, z/OS

Black Friday 2015 – can your databases cope?

Are you prepared for Black Friday? The 27th November is predicted to be the UK’s first £1bn online shopping day.

The latest in a long line of imports from the US saw £810m spent by UK shoppers in 2014, and reportedly four times as many people plan to take advantage of the Black Friday deals this year.

Then hot on the heels of Black Friday will be Cyber Monday, which this year will coincide with month end on 30th November.

Great news for online retailers – but will the IT infrastructure supporting your online retail sites cope? Networks and web servers are usually the focus of proactive performance improvements to cater for increased transaction workload, but what about database availability and scalability?

Last year’s big spend also brought with it some serious issues for several well-known UK high street brands, which experienced severe website downtime causing financial loss and customer frustration. And when customers can’t buy from you, they’ll simply take their business elsewhere.

Ensuring database and application scalability is key – can your systems handle the peak customer transactions during Black Friday and Cyber Monday?

Can you be sure that all of the SQL transactions accessing databases supporting your online retail applications are optimised and using minimal CPU?

Is your failover strategy in place in case the worst happens, and are you certain that your high availability solution is configured to ensure no downtime?

Triton Consulting offer a comprehensive Database Availability service to help you check that your DB2 databases and the applications upon which they rely are ready.

However, given Black Friday is nearly upon us, Triton consultants can provide a more focussed Database Availability review specifically for Black Friday:

  • Conduct a proactive health check to review critical DB2 system settings and assist with identification of potential workload bottlenecks;
  • Evaluate your failover strategy and ensure your high availability solution is configured appropriately, ready for Black Friday;
  • Provide recommendations to improve database scalability, including potential changes specific to Black Friday workloads only, to ensure database and application availability.

For more information on how we can help you in the run up to the busiest online shopping period of the year, please visit or contact us now

Posted in capacity planning

Join Triton and DBI Software at IDUG EMEA 2015!

With just under two months to go until this year’s IDUG EMEA conference in Dublin, the Triton team is busy planning our annual drinks reception. This year we will be teaming up with our business partner DBI Software. Our joint event will take place just around the corner from the IDUG conference at M O’Brien’s on Monday 16th November from 8pm. You can view the location map here.

We’ve had to limit guest numbers, so to guarantee entry simply head on over to DBI Software’s booth 110 on the Monday where you’ll be able to pick up this year’s competition leaflet, which will also double up as your event invitation – so keep it safe!

We also return with a fresh, new DB2 competition to tax your brains. In the usual Triton fashion we have a great selection of prizes to give away to our lucky competition winners.

  • 1 x Motorola Moto 360 Android Smartwatch
  • 1 x Amazon Echo
  • 1 x Philips Hue Connected Lights Starter Pack

We’ll be providing further information leading up to the event, so we suggest subscribing to our newsletter to keep up to date. 

We look forward to seeing you soon.

Posted in DB2, DB2 Geek, IDUG, IDUG EMEA 2015

Top 10 DB2 Support Nightmares and How to Avoid Them #10

Number 10 in our Top 10 DB2 Support Nightmares series. This month we take a look at what issues arise from over-federating.

Posted in DB2, DB2 Geek, DB2 LUW, DB2 Support, Remote DBA

Reduce MIPS to Manage Budget Pressures

Worldwide economic uncertainty in recent years has put significant pressure on CIOs to at least keep IT costs level and more likely push for cost reduction across the board.  Gone are the days when performance issues were relatively easily handled by adding or upgrading hardware and such purchases were part of the routine budget cycle – today everyone is expected to “sweat the assets” and “do more with less”.

One of the most effective ways of reducing or containing mainframe costs is through better management of CPU consumption.  By slowing down growth of CPU usage and managing workload placement, hardware and software upgrades can be deferred thereby allowing organisations to keep costs down and profitability up.

Reasons for MIPS Growth

The main reasons for growth can be categorised as follows:

  • Increased transaction rates.  Whilst it is generally good news for businesses to see transaction rates on the rise, with z/OS usage-based pricing this increase in workload can push up software costs and can also negatively impact application performance.
  • More demanding applications.  As enterprises strive to remain competitive, more business operations are being automated and packaged business applications from vendors such as SAP and Oracle are becoming more common.  Such applications are highly capable and flexible but typically require more computing resources than their bespoke equivalents.  Other factors can also result in increased application CPU usage, such as the move to Java-based applications and reductions in development and testing time due to business and cost pressures.
  • Middleware overheads.  IBM is constantly enhancing critical enterprise middleware components such as DB2 and CICS, adding new capabilities to improve productivity and reduce development and operational costs.  However, such enhancements can often entail additional CPU overheads and many organisations find themselves paying the CPU cost for the new function without always being able to exploit the benefits.

The Effects of MIPS Growth

The growth factors discussed above can have a direct impact on the cost and performance of an organisation’s mainframe applications:

Cost – The major driver for many IT teams is the need to reduce mainframe resource usage and thereby potentially defer hardware upgrades and reduce monthly MIPS costs.  There are also human costs to consider: maintaining an underperforming system takes more time and resource from IT teams, and adds pressure from the business teams who are calling for improved response times.

Performance – Typically, any significant increase in the amount of CPU used by a given workload will result in an associated increase in transaction elapsed times.  For performance-critical online workloads, that increase can translate directly into poorer critical business metrics such as customer satisfaction and retention.    

Just throwing more MIPS at a poorly-performing workload does not always address the issue.  A 2-hour response time may be reduced to 1.5 hours with more CPU time being available, but the problem might be due to a poor access path and some expert DBA attention could get it down to 5 seconds. This is especially true of application performance tuning, which is where the majority of performance issues tend to lie.
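As a trivial illustration of the kind of thing a DBA would check first (the object names below are invented), EXPLAIN shows whether a statement is scanning the data rather than using an index:

```sql
-- Capture the access path for the suspect statement
EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT ORDER_ID, ORDER_TOTAL
  FROM   ORDERS
  WHERE  CUSTOMER_ID = ?;

-- ACCESSTYPE = 'R' indicates a table space scan; 'I' with MATCHCOLS > 0
-- indicates matching index access
SELECT QUERYNO, ACCESSTYPE, MATCHCOLS, ACCESSNAME
FROM   PLAN_TABLE
WHERE  QUERYNO = 101;
```

A scan that should be a matching index probe is exactly the sort of finding that turns a 2-hour batch run into a 5-second one, at a fraction of the cost of extra MIPS.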

By analysing your current environment and identifying opportunities to optimise and tune your mainframe workloads you can regain control of spiralling costs.

Tuning Challenges

The key to a successful workload tuning exercise is to know where you’re starting from.  It is vital to understand exactly where workload peaks occur, and how they contribute to software licence costs, before you can really be effective with your workload tuning.  I have seen many large mainframe customers struggle to get a clear view on when their workload peaks occur across all mainframe workloads.  Only once these peaks have been identified can organisations really bring down the cost of their mainframe software licensing through tuning activities.

One of the major challenges in any environment, but particularly with client/server applications, is determining which component is responsible for poor response times, although the tools for this are improving.  Another related challenge is “skills silos”, where a client has the individual skills necessary to resolve a particular issue but no single person has the whole picture and internal culture and politics prevent the individuals from communicating and collaborating effectively. 

Regain Control of Software Charges

The majority of mainframe users have significant potential for reducing resource consumption (and therefore costs) through performance tuning of key workloads.  This is especially true for those with older applications that haven’t been actively maintained for a while or who have lost some of their deep middleware skills through retirement or redundancy.  By tuning these workloads, ongoing software costs can be reduced and mainframe upgrades potentially deferred.  In addition, application performance will be enhanced and overall Total Cost of Ownership (TCO) reduced.

Through the effective analysis and tuning of workload peaks it is typically possible for organisations to reduce their CPU charges by at least 3-5%.  For organisations running very large workloads this can equate to savings of tens of thousands of pounds per month.

The benefits of mainframe workload tuning can be felt across the entire business.  From the CFO who will see significant reduction in IT spend through to the IT teams who benefit from improved application performance and thus improved customer service, a thorough tuning exercise can indeed improve many aspects of business performance.

Find out more about our Mainframe Cost Reduction Service

Posted in CIO, mainframe, System Z, zTune

How to handle the Mainframe skills gap

It has been an interesting 12 months in the Mainframe world. 2014 saw IBM celebrating 50 years of the Mainframe, and in January this year the latest incarnation of the Mainframe, the z13, was launched. Propelling the Mainframe back into “cool new” technology territory, the z13 has been designed specifically with the mobile economy in mind.

More good news for Mainframe emerged earlier this year from Compuware’s Global CIO Survey which delivered some positive messages around Mainframe’s role within big business:

- 88% of CIOs agree that the Mainframe will continue as a key business asset over the next decade

- 81% of CIOs believe that Mainframe technology continues to evolve

- 78% of CIOs see the Mainframe as a strategic platform that will enable innovation

As heart-warming as these messages are for those involved in the Mainframe world, they come with a warning for the future:

- 70% of CIOs are concerned about knowledge transfer – with few younger IT professionals heading to the Mainframe to begin their careers, and life-time Mainframers approaching retirement, this is a very real concern.

- Despite this, 39% of CIOs admitted that they have no specific plans for addressing the Mainframe skills shortage.

Plan for change now

Managing the skills transition efficiently will be vital for large organisations to ensure they maintain their critical data effectively and keep competitive in these challenging times.

The Options:

Outsourcing mainframe services is certainly an option but it does bring with it many complications. Outsourcing to a third party means that the ingrained organisational knowledge of those currently managing the system can be lost. Although the outsourcing provider is no doubt highly skilled, they don’t have that intimate knowledge of the organisation which is built up over many years.

Both IBM and CA are putting huge amounts of money and effort into training the next generation of mainframe experts by running education initiatives through universities in the US. However, training university students takes time to filter through the system and we are yet to see this trend crossing over to the UK.

Access to skilled Mainframe resource

It can be difficult to access skilled Mainframe resource. Triton Consulting provide flexible resourcing options to organisations who need to supplement their Mainframe skills. With the option to purchase a block of hours, our Consultancy on Demand solution allows organisations to access key skills as and when they are required. These consultancy hours can be used for training and skills transfer; to provide specific skills where they are lacking in the team or for specific project work. Working alongside the in-house team our highly experienced consultants can enable large Mainframe users to manage their resource requirements in a highly cost effective and flexible manner.

Find out more about Consultancy on Demand

Compuware Global CIO Survey

Posted in CIO, DB2, DB2 Training, IBM, mainframe, System Z, z/OS, zSeries

Flexible Resourcing

Flexible IT resourcing can mean different things to different people.  It could be that your organisation needs access to extra IT skills quickly or on a short-term basis to plug a gap in the team due to unexpected leave.  Perhaps a new project means that you will need extra resources or specialist skills.  To some, flexible resourcing means the ability to call on a trusted supplier for resource on an ad hoc basis without the need to go through a protracted contracting process each time.

Some of the most common resourcing problems which we see from our customers are:

- Ability to gain access to experienced support for specific projects or tasks

- Need to supplement resourcing for short-term cover

- Delays in getting resource on site due to long time-frame procurement and contracting processes

Triton Consulting have come up with a solution to help organisations better manage their resourcing requirements in a flexible and cost effective way.  Our solution is called Consultancy on Demand.  With this type of contract you can purchase a block of hours which can be used over a 12-month period for any type of DB2 related consultancy, project work, ad hoc support or training.  You only have to go through the procurement process once when the contract is set up, and from then on you have access to the resource you need, when you need it.

Find out more about Consultancy on Demand

Listen to the podcast from our customer CPA Global on how they have used their Consultancy on Demand hours – Listen Now

Posted in CoD, DB2, DB2 Support