Wednesday, January 27, 2016

Process versus flexibility

I heard an interesting statement (just minutes ago) that really struck a chord with me: agreed, we need to have a balance between process and flexibility. It resonated with me, but also sent a shiver up (and down) my spine. A common thread I see is that folks who like "process" tend to be "inflexible" and people who espouse "flexibility" do so at the expense of "good process". I disagree violently and think this is a false dichotomy. At the end of the day, "a crappy process" or "an ad hoc process" is still a "process", and having either of those is no more or less flexible than having a vigorous and dynamic process that supports your business objectives.

The problem as I see it is that "process" has gotten a bad rap because many egghead know-nothing consultants have sold horribly convoluted and untenable processes to folks, and when said customers see their effectiveness crash, they blame the "process". Having lived through this, I've seen many ad-hoc and not particularly well-thought-out processes be VERY effective, and watched (multiple times) well-meaning but misguided academic "one size fits all" processes wreak havoc on a system that, instead of being improved, was utterly destroyed by a new...but ineffective...process.

My stance is this: if your business model needs flexibility, your process should support flexibility in places where it makes sense...but it should discourage flexibility in places where it works against your goals. In the "process" camp, many people fall victim to the idea that "I've designed the 'ultimate' process...if your business needs don't fit into it, you're a stupid head". On the flip side, in the "process is evil" camp, there's the notion that "process is for morons; I have smart people, I can just change things at a moment's notice and they'll do whatever they think is right and magically everything will work out". Obviously both of these are wrong (put this way), but at the end of the day, not using process-oriented thinking will lead to conflict.

Listen, an "ad-hoc, do what you want" process is still a process. I would contend that an entire company set up where everybody just does what they feel like doing at that moment, regardless of business objectives, will fail quickly...unless coincidentally everyone's goals happen to be aligned and they have perfect knowledge of what each and every other person is doing. Yet again, on the flip side: process for process's sake is just dumb and generally only rewards management consultants and people who really want to move papers from point A to point B.

In conclusion, the important thing to remember is that your process should support things you want to happen, and discourage things you don't want to happen. If the risk/reward within the process is equal or upside down, you will spend time fighting the process instead of letting it support your objectives. Put another way, the goal is to make the process YOUR tool, instead of being just a tool abiding by the process.

Tuesday, January 26, 2016

Navigating The Internet of Things

Having worked a number of years with connected devices, I'd like to briefly share some observations and pitfalls that folks just arriving in the field should heed.

The network isn't always there

Many folks arriving on the scene of connected devices come from a background where their applications ran in the datacenter and connectivity was the user's problem. That is, if someone tries to use your site and can't, then tries Google and that doesn't work either...they assume the problem is on THEIR end. While this isn't universal, it's much more likely than with someone who tries to turn off the lights in their house or start their car and it doesn't work. While wireless providers do a very good job of connecting devices when they are reachable, there are so many variables that impact coverage for a wireless device that it is almost impossible in most cases for a user to determine what is causing their "thing" to not work. This is especially aggravated when the device is mobile...that is, a Nest connected via WiFi is generally "working" once it initially establishes a wireless connection, but a connected vehicle moving at highway speeds will very often lose connectivity for periods of time (never mind parking in "dead" zones like underground parking garages or dead zones in rural areas). While a consumer device normally has an indicator of signal strength, many vehicle manufacturers still don't provide this functionality.
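One practical consequence: device software can't treat a send as fire-and-forget the way a datacenter app might. A minimal sketch of the store-and-forward pattern, where readings queue locally and flush when connectivity returns (the class and function names are illustrative, not any real IoT SDK):

```python
from collections import deque

class TelemetryUploader:
    """Store-and-forward uploader: queue readings locally and flush
    them when connectivity returns, instead of assuming the network
    is always up. Illustrative sketch, not a real SDK."""

    def __init__(self, send_fn, max_queue=1000):
        self.send_fn = send_fn                # callable that raises on network failure
        self.queue = deque(maxlen=max_queue)  # oldest readings drop first when full

    def record(self, reading):
        self.queue.append(reading)

    def flush(self):
        """Try to drain the queue; stop at the first failure and keep
        the remaining readings for the next attempt."""
        sent = 0
        while self.queue:
            reading = self.queue[0]
            try:
                self.send_fn(reading)
            except ConnectionError:
                break                         # network gone again; retry later
            self.queue.popleft()              # only discard after a confirmed send
            sent += 1
        return sent
```

The bounded queue is the key design choice: a vehicle parked in an underground garage for a week shouldn't exhaust its own storage, so the oldest readings age out rather than blocking new ones.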

Manage battery life

Designing a device to be "always on" with a high-speed data connection is a recipe for designing a device that isn't really useful. Having just purchased a Samsung Gear S, the need to charge it every day is a bit of a drag. Yes, I could disable all connectivity and stretch the battery life to days, but then it's just a watch. Don't get me wrong, I love the connectedness of my Gear, but it's been sitting dead on my desk for a week now because I keep forgetting to charge it. Power management is a "big deal": don't drain your user's battery unnecessarily, and give them options to help manage the tradeoff between "battery life" and "connectivity". For folks used to writing desktop and server applications, the idea of managing power is a completely new thing that embedded folks might understand...but surrendering to the embedded notion of conserving every last milliamp-hour at the expense of usability is not going to cut it.
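"Give them options to manage the tradeoff" can be as simple as scaling how often the radio wakes up based on battery level and a user-chosen mode. A sketch of that idea; the mode names and all thresholds are made-up illustrations, not vendor recommendations:

```python
def poll_interval_seconds(battery_pct, user_mode="balanced"):
    """Pick a sensor/radio polling interval from the battery level and a
    user-selected tradeoff mode. All numbers are illustrative guesses."""
    base = {"performance": 30, "balanced": 120, "saver": 600}[user_mode]
    if battery_pct > 50:
        return base          # plenty of charge: poll at the mode's base rate
    if battery_pct > 20:
        return base * 2      # slow down as the battery drains
    return base * 8          # near-empty: wake the radio only rarely
```

The point isn't these particular numbers; it's that the policy is visible and user-adjustable instead of either "always on" or "embedded-style dark until docked".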

Open wins

Too many folks in the IoT space are hoping to win market share by using proprietary APIs and locking them down to keep competition at bay; this is a short-sighted position. Apple realized this early on, and Google forced the issue by creating an open platform. While Google itself may not directly be reaping the benefits of proprietary licensing schemes, they have succeeded in fragmenting the market and creating new opportunities for revenue that sticking with a proprietary network and stack would never allow (remember RIM?).

Monday, December 7, 2015

Lies, Damn Lies, and Virtualization

Having used virtualization in a variety of scenarios over the last 10-15 years, I still find some misconceptions about the value proposition and how to use it. At best these are just marketing misconceptions, but at worst they can lead to counterproductive activities that HURT your solution. So, in no particular order, here are three things I hear people say that are just normally not true. Yes, I understand that there are some scenarios where they hold, but for the most part, in my experience, these ideas have been taken too far and now create problems.

Virtualization helps me scale my solution

I hear this so much I really just get tired of re-explaining how this isn't true. While based on a grain of truth, in the general sense at the datacenter level, it's completely false. For a single application it is easier to add cores to a virtual machine than it is to buy new hardware...but...people forget that the hypervisor is running on real, honest-to-goodness hardware, and adding a new virtual machine on existing hardware actually REDUCES the capacity of the other virtual machines. Couple this with overprovisioning and you could end up with 4 virtual cores giving you the capacity of a single core (or less). Worse yet, if you're just the "customer", you may need 4 virtual machines to get the capacity of a much smaller piece of hardware.
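The "4 virtual cores = 1 real core" arithmetic is worth making explicit. A back-of-the-envelope sketch (a deliberate simplification that assumes uniform load and ignores hypervisor overhead, which only makes the picture worse):

```python
def effective_cores_per_vm(physical_cores, vms, vcpus_per_vm):
    """Back-of-the-envelope effective CPU capacity under overcommit.
    When total vCPUs exceed physical cores, each vCPU delivers only a
    fraction of a real core. Simplified: assumes all VMs are equally
    busy and ignores hypervisor scheduling overhead."""
    total_vcpus = vms * vcpus_per_vm
    overcommit = total_vcpus / physical_cores
    per_vcpu = min(1.0, 1.0 / overcommit)  # fraction of a physical core per vCPU
    return vcpus_per_vm * per_vcpu         # real-core equivalent each VM sees
```

With 16 physical cores carved into sixteen 4-vCPU VMs (a 4x overcommit), each "4 core" VM effectively gets about one real core's worth of capacity, which is exactly the trap described above.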

Hardware virtualization doesn't have any overhead

This is just silly; of course it does... Sure, hardware virtualization is going to have much lower overhead than software virtualization, but scheduling virtual machines on and off of shared CPU, memory, and cache is overhead...and again, if overprovisioning against underlying hardware is your "enterprise scaling strategy", be prepared for a performance impact. Virtualization has overhead.

vMotions are undetectable and have no performance cost

I normally don't use profanity, but there's one word for this: "Bullshit" (OK, maybe that's two words). For some reason, this lie propagates to the point (I blame VMware marketing for doing too good a job) that people have an almost religious belief in the "all holy magic" of VMware's ability to magically move the state of a virtual machine with "zero impact on performance". Just stop believing: while it's transferring and maintaining the state of a virtual machine from one piece of physical hardware to another...things slow down. End of story. Don't believe me? Drive a virtual machine to some nontrivial load level (and measure the application performance...i.e., how LONG do my nontrivial business transactions take?...don't forget that...many people do), then vMotion it...if you're lucky it will only be a minute or two of "holy crap! what happened?"
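The experiment above only works if you measure at the application layer, not the hypervisor dashboard. A minimal sketch of timing business transactions end-to-end and reporting percentiles, so a migration stall shows up as a p99 spike even when infrastructure counters look clean (`run_txn` is a placeholder for your actual workload):

```python
import time

def measure_transactions(run_txn, count=1000):
    """Time each transaction end-to-end and report median and tail
    latency. Run this while driving load, trigger a vMotion mid-run,
    and watch the tail. `run_txn` stands in for a real business
    transaction; this harness is illustrative."""
    latencies = []
    for _ in range(count):
        start = time.perf_counter()
        run_txn()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    p50 = latencies[len(latencies) // 2]        # median latency
    p99 = latencies[int(len(latencies) * 0.99)] # tail latency: where stalls hide
    return p50, p99
```

Averages over a long run will happily hide a minute-long migration stall; the tail percentile is where "holy crap! what happened?" actually lives.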