
DBaaS or DBA end boss

We wrote a blog earlier arguing that there is nothing wrong with managing databases in a slightly older environment. Well, there is also nothing wrong with more mature database administrators. Thomas Spoelstra – yes, the one from the man cave – is one of those. He knows better than anyone how to set up a database as efficiently as possible, because "back in the day" there simply wasn't more disk space or memory to be had.

As we move en masse to the Cloud, with all its space and ability to scale, those efficiency skills don't seem so important anymore. Until the Cloud costs skyrocket. Then you still need someone who can substantively assess and fine-tune the data model in order to reduce your cloud costs. Someone like Thomas…

Thomas Spoelstra

Team Lead and Senior Database Reliability Engineer

Previous century

You can safely call me an old DBA. I took my first steps into the wonderful world of databases in the last century. In 1989, to be exact. That wonderful world consisted of DBMSes that younger colleagues have never heard of, such as INFOS-II and dBase III. Data was often stored in flat files, and we were masters at generating keys and optimizing storage.

The key, the whole key and nothing but the key

One fine day I was introduced to a relational database system, Sybase 10, running on another relic: OpenVMS. One limitation I remember to this day was the maximum database size of two gigabytes.

The care we took to spread databases across different disks, to increase the spindle count and with it the performance! Serious thought went into the design of the data model. Tables had to be kept compact, the data types for each column had to be carefully selected, and you normalized your data.

The third normal form was considered sacred, and the mantra was: every attribute must represent a fact about the key, the whole key and nothing but the key. You needed a seriously good reason not to normalize to this magical third normal form. Tuning and optimizing queries was an essential part of the daily work.
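For readers who never had that mantra drilled into them, here is a minimal sketch of what it means in practice. The table and column names are invented for illustration, and SQLite is used purely for brevity; the idea is identical in any relational database.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Denormalized: customer_city depends on customer_id, not on the
# order_id key -- a transitive dependency that violates 3NF.
con.execute("""CREATE TABLE orders_flat (
    order_id      INTEGER PRIMARY KEY,
    customer_id   INTEGER NOT NULL,
    customer_city TEXT    NOT NULL)""")
con.executemany("INSERT INTO orders_flat VALUES (?, ?, ?)",
                [(1, 10, "Utrecht"), (2, 10, "Utrecht"), (3, 20, "Arnhem")])

# 3NF: every attribute is a fact about the key, the whole key and
# nothing but the key. The city moves to its own customers table.
con.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    city        TEXT NOT NULL);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id));
INSERT INTO customers SELECT DISTINCT customer_id, customer_city FROM orders_flat;
INSERT INTO orders    SELECT order_id, customer_id FROM orders_flat;
""")

# A join reconstructs the original rows; the city is now stored once
# per customer instead of once per order.
rows = con.execute("""SELECT o.order_id, o.customer_id, c.city
                      FROM orders o JOIN customers c USING (customer_id)
                      ORDER BY o.order_id""").fetchall()
print(rows)  # [(1, 10, 'Utrecht'), (2, 10, 'Utrecht'), (3, 20, 'Arnhem')]
```

With three rows the saving is trivial, but on millions of orders the difference in storage, and in the update anomalies you avoid, is exactly why we treated 3NF as sacred.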

Fast forward

By now we are a decade or three down the road, and look at how the world has changed. Hardware is cheaper than ever. In a way it follows Moore's law: machines double in power every few years while barely getting more expensive.

From discrete hard drives as database storage we moved to RAID 5, and then to huge storage arrays with hardly a moving part aside from the fans. Platters and spindles? Hardly anyone talks about those anymore, because when the database grows, you just add more storage. Without blinking an eye. Does performance deteriorate a bit? Then you simply add a few CPUs and a few gigabytes of RAM. But this comes with hidden costs.

Scare

With the speed at which we are embracing the Cloud today – and all that comes with it, such as DBaaS – I foresee some traditional DBA skills slowly becoming less important. Up to a point, that is. We've seen it a number of times: companies move to the Cloud, and as soon as performance issues arise they add storage and, above all, hardware.

This happens a few times, until someone is shocked by the monthly bill. Then suddenly the cry goes up: why are we spending so much on Cloud infrastructure for our database? What should we do to reduce our Cloud spending?

Database neglect

We have assisted a number of clients with this question. One thing we see recurring is a lack of database maintenance: rebuilding indexes, reclaiming space and reorganizing tables are all often neglected.
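What that neglect costs can be shown even with SQLite as a stand-in (PostgreSQL has its own VACUUM, REINDEX and ANALYZE for the same jobs; the file name and row counts below are invented for illustration). Deleted rows leave dead space behind, and the file only shrinks once someone actually reclaims it:

```python
import os
import sqlite3
import tempfile

# The database lives in a temporary file so we can watch it shrink.
path = os.path.join(tempfile.mkdtemp(), "bloat.db")
con = sqlite3.connect(path)

con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, "x" * 500) for i in range(10_000)])
con.commit()

# Deleting half the rows frees pages internally, but the file
# keeps its full size until the space is reclaimed.
con.execute("DELETE FROM t WHERE id % 2 = 0")
con.commit()
before = con.execute("PRAGMA page_count").fetchone()[0]

con.execute("ANALYZE")   # refresh planner statistics
con.execute("REINDEX")   # rebuild indexes
con.execute("VACUUM")    # rewrite the file compactly, reclaiming space
after = con.execute("PRAGMA page_count").fetchone()[0]

print(before, after)  # after is roughly half of before
```

A database that never gets this kind of attention keeps paying, in storage and in I/O, for data that is no longer there. In the Cloud, that bill arrives every month.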

Datatype mismatches in queries are another common cause of poor performance. The same goes for joins between tables that are less than ideal, to say the least; we've seen dramatic performance improvements simply by rewriting a query. And these are just a few simple examples.
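Here is a minimal sketch of why a datatype mismatch hurts, again using SQLite with invented table contents: as soon as the comparison forces a conversion on the indexed column, the planner can no longer use the index and falls back to a full table scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(str(i), f"customer {i}") for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN returns rows whose last column describes the step.
    return " ".join(r[-1] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

# Matching types: the primary-key index is used (a "SEARCH" step).
good = plan("SELECT name FROM customers WHERE id = '42'")

# Mismatch: converting the column defeats the index (a full "SCAN").
bad = plan("SELECT name FROM customers WHERE CAST(id AS INTEGER) = 42")

print(good)  # e.g. "SEARCH customers USING INDEX ..."
print(bad)   # e.g. "SCAN customers"
```

On a thousand rows nobody notices the scan; on a billion rows it is the difference between milliseconds and minutes, and often the fix is exactly this kind of one-line query rewrite.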

Clammy hands

As long as DBaaS offerings flood the market (think AWS RDS, AWS Aurora and Azure SQL), such performance problems will persist. A few mouse clicks, your credit card number and presto, your database is in the cloud. But as your organization grows, that "small" database of 20 gigabytes has swelled to a terabyte before you know it. Then just try to keep your PostgreSQL database performing without getting clammy hands.

More limitations

And there are more limitations to the DBaaS concept. On a bare-metal PostgreSQL server you can install any extension you like, whereas a DBaaS instance severely limits your choice. On bare metal you can, for example, install an extension and experiment with hypothetical indexes and partitions before taking the step of actually creating the index or partitioning the table. Partitioning a huge table is no easy task and has to be worth the time and effort. Easy to try out on your bare-metal server, but not supported on DBaaS instances.

Threatened with extinction

Unfortunately, traditional DBA skills seem to be slowly dying out, which is a shame. After all, who will then substantively review and fine-tune your data model to reduce your cloud costs? Who will still spend time reviewing queries and data models, and optimizing them for maximum performance?

Who will spend time creating a proper database maintenance plan? And who will be able to advise and coach developers on best practices?

What about your DBA skills?

Do you still have them, those old skool DBA skills? Or would you really like to develop these skills? Then we are looking for you to join our team.

Is your database fit enough?

Feel free to contact us if you want to know more about the possibilities of a QuickScan or HealthCheck on your environment or if you are concerned about the performance of your databases. Want to know more about the route to effective database management? Then download our whitepaper here.