12 database security landmines, failures, and mistakes that doom your data

In most enterprise stacks today, the database is where all our secrets wait. It’s part safe house, part ready room, and part staging ground for bits that may be intensely personal or extremely valuable. Defending it against all incursions is one of the most important jobs for the database administrators, programmers, and DevOps teams that rely upon it.

Alas, the job isn’t easy. The database creators give us the tools: they build in good security measures and document them. Yet dozens of potential errors, oversights, and mistakes, both stupid and understandable, make securing a database an endless challenge.

To help keep track and stay on our toes, here’s a list of failure modes that have tripped up even the best of us.

1. Inadequate access management

Many databases live on their own machine, and that machine should be as locked down as possible. Only essential users should be able to log in as a database administrator, and those logins should be limited to a narrow range of networks and machines. Firewalls can block everything else by IP address. The same rules should apply at the operating system layer, too, and if the database runs on a virtual machine, at the hypervisor or cloud administration layer. These constraints will slow down the work of updating software and fixing issues, but restricting the paths that attackers can take is worth it.
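As a hedged sketch of what that narrowing can look like at the account level, the snippet below uses MySQL-style syntax (through the MySQL Connector/Python driver) to tie the administrative login to a single internal subnet. The host name, subnet, and credentials are placeholders; other databases expose the same idea through their own account and host-based authentication settings.

```python
# Sketch: create a MySQL admin account that can only connect from a narrow
# internal subnet. Host, subnet, and credentials are placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="db.internal.example", user="root",
                               password="********")
cur = conn.cursor()

# Administrative login allowed only from the ops subnet 10.0.12.0/24.
cur.execute("CREATE USER 'dba'@'10.0.12.%' IDENTIFIED BY 'use-a-long-random-secret'")
cur.execute("GRANT ALL PRIVILEGES ON *.* TO 'dba'@'10.0.12.%' WITH GRANT OPTION")

# Because no wildcard 'dba'@'%' account exists, logins from outside the
# subnet fail before a password is even checked.
conn.commit()
conn.close()
```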

2. Easy physical access

There’s no telling what a clever attacker might do inside the server room. Cloud companies and co-location facilities offer locked cages inside heavily guarded buildings with limited access. If your data is stored locally in your own data center, follow the same rules by ensuring that only the trusted few have access to the room holding the physical disk drives.

3. Unprotected backups

It’s not uncommon for a team to do a great job securing a database server but then forget about the backups. They hold the same information and so they need the same care. Tapes, drives, and other static media should be locked in a safe, preferably in another location where they won’t be damaged by the same fire or flood that might destroy the originals.

4. Unencrypted data at rest

The algorithms for scrambling data are generally trusted because they’ve been widely tested, and the current standards have no publicly known weaknesses. Adding good encryption to the database and its backups is now easy to do for all data at rest. But even when the algorithms and implementations are secure, the keys must still be carefully protected. Cloud providers and server makers now offer trusted hardware that sits apart from the normal workflow so the keys are safer inside it. Even if these systems aren’t perfect, they’re better than nothing. When the data will stay encrypted at rest for a long time, some teams prefer to keep the keys in a different physical location, preferably offline. Some even print out the keys and put the paper in a safe.
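Most databases ship their own at-rest encryption (transparent data encryption, encrypted tablespaces, and similar features), and those built-in options are usually the right starting point. As a minimal sketch of the underlying idea, the snippet below encrypts a nightly dump with the Python cryptography package before it leaves the host; the file names are hypothetical, and the key written to disk here would, in practice, go to a key management service or offline storage.

```python
# Minimal sketch: encrypt a database dump before it is shipped off-host.
# File names are placeholders; in practice the key goes to a KMS or an
# offline safe, never alongside the backup it protects.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()           # random key, urlsafe base64-encoded
with open("backup.key", "wb") as fh:  # store this far away from the backup
    fh.write(key)

fernet = Fernet(key)
with open("nightly_dump.sql", "rb") as fh:
    ciphertext = fernet.encrypt(fh.read())

with open("nightly_dump.sql.enc", "wb") as fh:
    fh.write(ciphertext)
```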

5. Not using privacy-protecting algorithms

Encryption is a good tool for protecting physical copies of the database as long as you can protect the key. A wide variety of good algorithms go further and scramble the data permanently, in ways that can’t be reversed. They can’t solve every problem, but they can be surprisingly effective when there’s no need to keep all the sensitive data available. The simplest may be replacing names with random pseudonyms. Dozens of other approaches use just the right amount of math to protect personal data while still leaving enough in the clear to accomplish the goals of the database.
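As one small sketch of that simplest approach, the snippet below replaces names with stable pseudonyms by running them through a keyed hash (HMAC): the same input always maps to the same token, so joins and counts still work, but the cleartext names never land in the table. The key shown is a placeholder and needs the same protection as any encryption key.

```python
# Sketch: stable pseudonyms via a keyed hash. The same name always maps to
# the same token, so grouping and joining still work, but the cleartext name
# is never stored. The key is a placeholder and must be kept secret.
import hmac
import hashlib

PSEUDONYM_KEY = b"replace-with-a-real-secret"

def pseudonymize(name: str) -> str:
    digest = hmac.new(PSEUDONYM_KEY, name.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]   # short token, deterministic per key

print(pseudonymize("Ada Lovelace"))
print(pseudonymize("Ada Lovelace") == pseudonymize("Ada Lovelace"))  # True
```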

6. Lack of proliferation controls

When data is being used, it’s going to be copied into caches and running servers. The goal for data storage architects is to minimize the number of copies and ensure that they’re destroyed as soon as the data isn’t being used. Many databases offer options for routinely mirroring or backing up as a feature to defend against machine failure. While this can be essential for providing stable service, it pays to think carefully about proliferation during design. In some cases, it may be possible to limit rampant copying without compromising service too much. Sometimes it can be better to choose slower, less redundant options if they limit the number of places where an attacker might break in.

7. Lack of database controls

The best databases are the product of decades of evolution, driven by endless testing and security research. Choose a good one. Their creators have also added good tools for managing and limiting access, so use them. Ensure that only the right apps can see the right tables. Don’t reuse the same password for every application, and certainly don’t use the default. Limit access to local processes or the local network when feasible.
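Here is a hedged sketch of what per-application accounts can look like, again in MySQL-style syntax with placeholder names, subnets, and passwords: each app gets its own credentials and only the table-level privileges it actually needs.

```python
# Sketch: one least-privilege account per application (MySQL-style syntax;
# names, subnets, and passwords are placeholders). The reporting app can
# only read, and neither app shares credentials with the other.
import mysql.connector  # pip install mysql-connector-python

STATEMENTS = [
    "CREATE USER 'checkout_app'@'10.0.20.%' IDENTIFIED BY 'distinct-secret-1'",
    "GRANT SELECT, INSERT, UPDATE ON shop.orders TO 'checkout_app'@'10.0.20.%'",
    "CREATE USER 'reports_app'@'10.0.30.%' IDENTIFIED BY 'distinct-secret-2'",
    "GRANT SELECT ON shop.orders TO 'reports_app'@'10.0.30.%'",
]

conn = mysql.connector.connect(host="db.internal.example", user="dba",
                               password="********")
cur = conn.cursor()
for stmt in STATEMENTS:
    cur.execute(stmt)
conn.commit()
conn.close()
```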

8. Vulnerable secondary databases

Many stacks use fast, in-memory caches like Redis to speed up responses. These secondary databases and content delivery networks often hold copies of the same information that lives in the main database, so spend just as much time configuring them correctly as you do the main database.
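As a small illustration from the client side, the snippet below assumes a Redis server that has already been locked down in redis.conf (a password via requirepass, a private bind address, protected-mode left on, and TLS enabled) and connects with the redis-py driver. The host, port, and password are placeholders.

```python
# Sketch: connect to a Redis cache that has been locked down server-side
# (requirepass, bind to a private address, protected-mode yes, TLS enabled).
# Host, port, and password below are placeholders.
import redis  # pip install redis

cache = redis.Redis(
    host="cache.internal.example",  # private address, not a public one
    port=6380,                      # TLS port in this hypothetical setup
    password="distinct-cache-secret",
    ssl=True,
)

cache.set("session:123", "opaque-token", ex=900)  # expire in 15 minutes
print(cache.get("session:123"))
```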

9. Vulnerable applications with access to data

All the careful database security isn’t worth much when a trusted application behaves badly, especially when that application has access to all the data. One common problem is SQL injection, an attack that tricks a badly coded app into passing malicious SQL to the database. Another is simply poor security in the application itself. In many architectures, the application sees everything, and if it doesn’t do a good job of blocking the wrong users, all this data can go out the front door.
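Here is a minimal sketch of the SQL injection half of the problem, using Python’s built-in sqlite3 module and an illustrative users table: a query built by string concatenation lets hostile input rewrite the statement, while a parameterized query treats the same input strictly as data.

```python
# Sketch: why parameterized queries matter. The table and data are
# illustrative; the same pattern applies to any SQL database driver.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

hostile_input = "nobody' OR '1'='1"

# BAD: string concatenation lets the input rewrite the query and
# return every row in the table.
unsafe = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + hostile_input + "'"
).fetchall()
print("concatenated query returned:", unsafe)   # leaks all users

# GOOD: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (hostile_input,)
).fetchall()
print("parameterized query returned:", safe)    # returns nothing
```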

10. Risky internet exposure

Databases are ideal candidates for living in a portion of the network with no public access. While some developers want to simplify their lives by opening up the database to the general internet, anyone guarding non-trivial information should think differently. If your database only needs to talk to your front-end servers, it can live happily on a segment of the network that only those front-end servers can reach.
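One quick way to sanity-check the placement, sketched below with a hypothetical host name and the usual PostgreSQL port: run a plain TCP connection test from a machine outside the private network and confirm that it fails.

```python
# Sketch: a quick exposure check. Run from a machine OUTSIDE the private
# network; a well-placed database should refuse or time out. Host and port
# are placeholders (5432 is the usual PostgreSQL port).
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_reachable("db.example.com", 5432))  # expect False from the public internet
```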

11. Lack of integrity management

Modern databases offer a wide variety of features that prevent errors and inconsistencies from entering the data set. Specifying a schema ensures that individual data elements conform to a set of rules, and using transactions and locking prevents errors from creeping in when one table or row is updated and another is not. Deploying these integrity-management options adds computational overhead, but using as many of them as possible reduces the effects of random mistakes and can also stop users from inserting inconsistent or incorrect data.
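Here is a small sketch of both ideas, using Python’s built-in sqlite3 module and an illustrative pair of tables: the schema rejects rows that break its rules, and wrapping related writes in a single transaction keeps the tables from drifting apart when something fails halfway through.

```python
# Sketch: schema rules plus transactions, with illustrative tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")          # enforce references in SQLite
conn.execute("""CREATE TABLE accounts (
                    id      INTEGER PRIMARY KEY,
                    balance INTEGER NOT NULL CHECK (balance >= 0))""")
conn.execute("""CREATE TABLE transfers (
                    id         INTEGER PRIMARY KEY,
                    account_id INTEGER NOT NULL REFERENCES accounts(id),
                    amount     INTEGER NOT NULL)""")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100)")
conn.commit()

# The schema rejects data that breaks its rules.
try:
    conn.execute("INSERT INTO accounts (id, balance) VALUES (2, -50)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)                       # CHECK constraint failed

# A transaction keeps the two tables consistent: either both writes land
# or neither does.
try:
    with conn:                                    # commits or rolls back as a unit
        conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
        conn.execute("INSERT INTO transfers (account_id, amount) VALUES (99, 40)")
except sqlite3.IntegrityError:
    pass                                          # no account 99, so both writes roll back

print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone())  # (100,)
```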

12. Retaining unneeded data

Development teams often think like packrats, storing away information for a future that may never come. But sometimes the simplest way to protect against data breaches is to erase the data. If you don’t need the bits to provide some future service and the customers will never ask to see them, you can zero out the information and be free of the burden of protecting it. If you’re not completely certain the data won’t be used again, you can erase the online copies and keep only offline backups in deep storage, where access is even more limited.
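Where a formal retention window exists, the cleanup itself can be as simple as a scheduled delete; the sketch below uses Python’s sqlite3 module and an illustrative events table to drop anything older than the window. Keep in mind that backups and replicas holding the old rows need their own purge, which no single SQL statement can do.

```python
# Sketch: enforce a simple retention window on an illustrative events table.
# Backups and replicas holding the old rows still need their own purge.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90
now = datetime.now(timezone.utc)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT, created_at TEXT)")
conn.execute("INSERT INTO events (payload, created_at) VALUES (?, ?)",
             ("stale", (now - timedelta(days=400)).isoformat()))
conn.execute("INSERT INTO events (payload, created_at) VALUES (?, ?)",
             ("fresh", now.isoformat()))

cutoff = (now - timedelta(days=RETENTION_DAYS)).isoformat()
deleted = conn.execute("DELETE FROM events WHERE created_at < ?", (cutoff,)).rowcount
conn.commit()
print(f"purged {deleted} row(s) older than {RETENTION_DAYS} days")  # purged 1 row(s)
```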
