Monday, August 18, 2014

It's not NoSQL versus RDBMS, it's ACID + foreign keys versus eventual consistency

The Background

Coming from a diverse background and having dealt with a number of distributed systems, I routinely find myself needing to explain why foreign keys managed by an ACID-compliant RDBMS (no matter how expensive or awesome) lead to a scalability problem that can be extremely cost-prohibitive to solve. I also want to clarify an important point before I begin: scalability isn't a binary yes-or-no answer; scalability should always be expressed as a cost per unit of scale, and I'll illustrate why.

Let's use a simplified model of a common web architecture.

In this model, work is divided between application servers (computation) and database servers (storage). If we assume that a foreign key requires validation at the storage level, then no matter how scalable our application layer is, we're going to run into a storage scaling problem. Note: Oracle RAC is this model...at the end of the day, no matter how many RAC nodes you add, you're generally only scaling computation power, not storage.

To circumvent this problem, the logical step is to also distribute the storage. In this case, the model changes slightly and it begins to look something like this.

In this model, the one used by distributed database solutions (including high-end ACID-compliant databases such as Oracle RAC, Exadata, or IBM pureScale), information storage is distributed among nodes responsible for storage, and those nodes don't share a disk. In the database scaling community, this is a "shared nothing" architecture. To illustrate this a little further, most distributed databases in a shared nothing architecture work in one of two ways; for each piece of data they either:

  • Hash the key and use that hash to look up the node with the data (a minimal sketch of this approach follows the list)
  • Use master nodes to maintain the node-to-data association
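
Here's a minimal sketch of the first approach, assuming a fixed set of storage nodes and simple modulo placement; production systems usually use consistent hashing so that adding a node doesn't reshuffle every key, but the routing idea is the same:

import java.util.Arrays;
import java.util.List;

// Hypothetical illustration: hash the key and use the hash to pick the storage
// node that owns it.
public class KeyRouter {
    private final List<String> nodes; // e.g. "db-01", "db-02", "db-03"

    public KeyRouter(List<String> nodes) {
        this.nodes = nodes;
    }

    public String nodeFor(String key) {
        int index = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(index);
    }

    public static void main(String[] args) {
        KeyRouter router = new KeyRouter(Arrays.asList("db-01", "db-02", "db-03"));
        System.out.println(router.nodeFor("customer:42")); // always the same node for this key
    }
}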

So, problem solved, right? In theory, especially if I'm using a very fast/efficient hashing method, this should scale very well by simply adding more nodes at the appropriate layer.

The Problem

The problem has to do with foreign keys, ACID compliance, and the overhead they incur. Ironically, this overhead has a potentially serious negative impact on scalability. Moreover, our reliance on this model and its level of abstraction often blinds us to bottlenecks and leads to mysterious phantom slowdowns and inconsistent performance.

Let's first recap a couple of things (a more detailed background can be found here for those that care to read further):

  • A foreign key is a reference in one table to a key in another table that MUST exist for an update or insert to be successful (it's a little more complicated than that, but we'll keep it simple)
  • ACID compliance refers to a set of rules about what a transaction means, but in our context it means that for update A, I must first look up information B

Here's the rub: even with a perfectly partitioned shared nothing architecture, if we need to maintain ACID compliance with foreign keys, we run into a particular problem. If the key for update A is on one node, and the key for update B is on a different node... we require a lookup across nodes of the cluster. The only way to avoid this problem... is to drop the foreign key and/or relax your ACID compliance. It's true that perfect forward knowledge might allow us to design the data storage in such a way that this is not really a problem, but reality is otherwise.
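
To make that cost concrete, here's a rough sketch (all class and method names are hypothetical, and it reuses the KeyRouter sketch from above) of what an insert with a foreign key check looks like once the referenced row can live on another node. A real distributed database does this work internally, but the extra round trip and coordination don't go away:

// Hypothetical sketch: an order insert whose customer row (the FK target) may
// live on a different shard. The actual node I/O is stubbed out.
public class ShardedOrderStore {
    private final KeyRouter router;

    public ShardedOrderStore(KeyRouter router) {
        this.router = router;
    }

    public void insertOrder(long orderId, long customerId) {
        String orderNode = router.nodeFor("order:" + orderId);
        String customerNode = router.nodeFor("customer:" + customerId);

        if (orderNode.equals(customerNode)) {
            // Best case: the referenced row is co-located, so the foreign key can
            // be validated inside a single local transaction.
            insertLocally(orderNode, orderId, customerId);
        } else {
            // The FK target lives on another node: an extra network round trip
            // (and, to stay ACID, a distributed transaction) is required before commit.
            if (!customerExistsOn(customerNode, customerId)) {
                throw new IllegalStateException("foreign key violation: customer " + customerId);
            }
            insertLocally(orderNode, orderId, customerId);
        }
    }

    private void insertLocally(String node, long orderId, long customerId) {
        // ...issue the insert against the node that owns the order key...
    }

    private boolean customerExistsOn(String node, long customerId) {
        // ...remote existence check; this round trip is the scaling cost in question...
        return true;
    }
}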

So, at the end of the day, when folks are throwing their hats into the ring about how NoSQL is better than RDBMS, they're really saying they want to use databases that are either:

  • ACID compliant and they'll eschew foreign keys
  • Not ACID compliant

And I think we can see that, from a scalability perspective, there are very good reasons to do this.

Friday, August 15, 2014

Things to remember about information security

As more businesses look to cloud application providers for solutions, the need for developers to understand secure coding practices is becoming much more important. Gone are the days when a developer could write an application that only ran in a secure environment; applications can now be moved to locations where previously well-managed security gaps are exposed to the internet at large. Developers, now more than ever, need to understand basic security principles and follow practices that keep their applications and data safe from attackers.

To make things more secure, a developer needs to first understand and believe the following statements:

  • You don't know how to do it properly
  • Nothing is completely secure
  • Obscurity doesn't equal security
  • Security is a continuum

You don't know how to do it properly

If I had a nickel for every developer who thought they invented the newest, greatest, cleverest encryption/hashing routine, I'd be a millionaire. Trust me, if you aren't working for the NSA or doing a doctorate on the subject, there are thousands of people who can defeat your clever approach...worse yet, even if you ARE in the aforementioned groups, there are still SOME folks who can defeat your approach.
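
For what it's worth, the sane move is to lean on primitives that thousands of people have already tried (and failed) to break. Here's a minimal sketch of password hashing with the JDK's built-in PBKDF2 support; the class name, iteration count, and key length are illustrative choices, not tuned recommendations:

import java.security.GeneralSecurityException;
import java.security.SecureRandom;

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {
    // Derive a password hash with PBKDF2, a standard and heavily reviewed
    // construction, instead of a home-grown routine.
    public static byte[] hash(char[] password, byte[] salt) throws GeneralSecurityException {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec)
                .getEncoded();
    }

    // A per-user random salt, stored alongside the hash.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}

Even a vetted routine like this only narrows the odds, though. Which means: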

Nothing is completely secure

The only way to completely secure a system or data is to completely destroy it. This is a mathematical fact; don't argue, just trust me on this. If ONE person can access the information, someone else can. MAYBE if it's in your head and your head alone it is pretty secure, but there are ways of getting that information too...some of which can be unpleasant. So, these two things having been said, I want to add the clarifying statement that:

Obscurity doesn't equal security

As someone who has witnessed back doors get exploited numerous times, I can tell you that "hiding the key under the rock" and hoping for the best is not a sound policy. Don't get me wrong, making targets less obvious is great... please do it... but be wary of relying on this as your sole security measure; it will be discovered. Which leads to my final point:

Security is a continuum

Remember how security isn't absolute? Well, this is a reassertion of that statement. When having discussions, the question isn't "is it secure (yes/no)?"; it should be "is it secure enough (yes/no)?" and "what are our threat vectors?". Subtly shifting the question away from an absolute yes or no opens up a discussion and lets you objectively begin to measure your risk.

Monday, August 11, 2014

Avoid Hibernate anemia and reduce code bloat

One of my beefs with Hibernate as an ORM is that it encourages anemic domain models that have no operations and are simply data structures. This, coupled with Java's verbosity, tends to make code unmaintainable (when used by third party systems) as well as cause developers to focus on THINGS instead of ACTIONS. For example, take the following class that illustrates part of a flight booking at an airline:

public class Flight {
    public Date start;
    public Date finish;
    public long getDuration() {
        return finish.getTime() - start.getTime();
    }
}

This is the core "business" requirement for a use case in this model, in terse Java. From an OO perspective, start and finish are attributes, and getDuration is an operation (that we happen to believe is mathematically derived from the first two fields). Of course, due to training and years of "best practices" brainwashing, most folks will immediately and mindlessly follow the Java bean convention, making all the member variables private and "just generating" the getters and setters. That makes the same functional unit above look like the following:

public class Flight {
    private Date start;

    public Date getStart() {
        return start;
    }

    public void setStart(Date start) {
        this.start = start;
    }

    private Date finish;

    public Date getFinish() {
        return finish;
    }

    public void setFinish(Date finish) {
        this.finish = finish;
    }


    public long getDuration() {
        return finish.getTime() - start.getTime();
    }
}

Wait, we're not done yet: if we want duration to be persisted, we'll move the logic to another class and add getters and setters:

public class Flight {
    private Date start;

    private Date finish;

    private long duration;

    public Date getStart() {
        return start;
    }

    public void setStart(Date start) {
        this.start = start;
    }

    public Date getFinish() {
        return finish;
    }

    public void setFinish(Date finish) {
        this.finish = finish;
    }

    public long getDuration() {
        return this.duration;
    }

    public void setDuration(long input) {
        this.duration = input;
    }
}

public class FlightHelper {
    public static long getDuration(long finish, long start) {
        return finish - start;
    }
}

This "Helper" or "Business Delegate" pattern is yet another area where things go wonky very quickly. Usually, to keep things "pure" folks will put all logic in the helper (or delegate, I'm not sure if there's a difference) and the model will have no logic. This really makes troubleshooting where the logic is contained very difficult. In addition, having a computed and stored field is fraught with potential for errors. Java folks will typically make the case that this class is really a Data Transfer Object (DTO)... OK, fine, but that's like saying an elephant is actually an herbivore...

But wait...it gets worse...

What I often see in Java circles is a death spiral of bloat in the interest of "best practices". A typical next step is that folks invariably realize that serializing Hibernate objects to remote servers or tiers that don't have access to Hibernate becomes a huge challenge, due to Hibernate's technique of using AOP to replace the real object with a dynamic proxy. To get around this, developers invariably create another layer of DTOs or "Value Objects", as well as a mapping layer to map between the two domains.
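
The result usually looks something like the following (names here are hypothetical, but the shape will be familiar): a parallel class that mirrors Flight field-for-field, plus a mapper whose only job is to copy values across.

// A parallel "value object" that can cross tiers without dragging proxies along.
public class FlightDTO {
    public Date start;
    public Date finish;
    public long duration;
}

public class FlightMapper {
    public static FlightDTO toDto(Flight flight) {
        FlightDTO dto = new FlightDTO();
        dto.start = flight.getStart();
        dto.finish = flight.getFinish();
        dto.duration = flight.getDuration();
        return dto;
    }
}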

When I ask most Java developers "why are we doing it this way?", I get blank stares, and the best answer I've heard is "because that's the way we do it", or often a link to a web site explaining how to do it and why, which is ultimately just a clever way of saying "I don't know". Crafty individuals will then start talking about Java patterns and all sorts of other artificial explanations that never explain "why", but simply re-endorse "how".

A way to mitigate this problem is to start decomposing application components functionally and realize that data persistence is in fact a first-order operation in most systems. This means that persisting data should be atomic and a single-step operation (hint: if you need a transaction manager, the call is NOT atomic). Additionally, putting these operations behind web services means that persisting data becomes an internal responsibility and not something a caller needs to know or care about.

Put another way: hide our persistence layer behind an API and don't create superfluous classes that need to be shared with third parties. So, in the example above, you could do something like:

public class FlightService {
    public Date getStart(long id) {
        //...implementation here...
    }

    // Creates a flight and returns its identifier
    public long createFlight(Date start, Date finish) {
        //...implementation here...
    }

    // Sets both fields and returns the resulting duration
    public long setStartAndFinish(long id, Date start, Date finish) {
        //...implementation here...
    }

    public Date getFinish(long id) {
        //...implementation here...
    }

    public long getDuration(long id) {
        //...implementation here...
    }
}
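
From the caller's side, the contract above is all that's visible; something like this hypothetical snippet, where lookupFlightService, departure, and arrival stand in for whatever wiring and inputs you actually have:

// Only the service contract is visible: no entity classes, no DTOs, and no
// knowledge of how (or whether) the data gets persisted.
FlightService flights = lookupFlightService();
long flightId = flights.createFlight(departure, arrival);
long durationMillis = flights.getDuration(flightId);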

This preserves idiomatic Java and lets us completely hide the implementation details from the caller. Yes, it introduces a transaction and granularity problem that we immediately need to solve... and it should force us (unless we really want to do it the hard way) to start thinking about the API contract for atomic operations. I think this is the important distinction and shouldn't be forgotten. Worry about what your design is supposed to DO first because, at the end of the day, the OPERATION is more important than the MODEL.