Saturday, March 08, 2014

Location, location, location - Why I Joined BMC

The enterprise software market is not that different from the real estate market - where you are positioned in the market is everything.

In the nerdier-than-thou Bay Area, moving from VMware to BMC is not the most obvious move, so here are some of my thoughts on my decision.

At this point, I have started 2 companies (Persistence, Medaid), gone public once (PRSW - never again!), sold 3 companies (Persistence, WaveMaker, Reportive) and led one spinout (Pivotal).

Figuring out what to do next was a challenge.

I had always felt that in evaluating a job, team comes first and opportunity comes second (or in Jim Collins-speak, first who, then what).

When I was first introduced to BMC, I spoke to Eric Yau and was impressed by his vision for transforming BMC - it felt very similar to the transformation project I had worked on at VMware. As I met with other BMC executives, I was struck by their overall quality and their commitment to making BMC the leader in cloud and automation management.

I believe that BMC has a unique position in the cloud space because it is not tied to a particular cloud platform. The other key players in the space - VMware, Amazon, Microsoft - all have a dog in the fight. They *care* which underlying platform their cloud automation manages.

In short, the other production-class cloud managers are focused on building a purebred cloud backed by their OS or hypervisor - only BMC has a singular focus on hybrid cloud.

If a key reason to move to cloud is greater customer choice, those same customers will be looking for the “Switzerland of cloud managers” to preserve their choice.

Time will tell, but so far I am thrilled with both the market opportunity in front of BMC and the collaborative culture within BMC.



Thursday, September 12, 2013

Engineering Management - Shaolin Style


A friend of mine just got a well-deserved promotion from code horse to manager. Here are my quick thoughts on making that transition.

The basic idea is that when you are given a little more responsibility, your words and actions carry more weight. For that reason, it is important to be careful about throwing that weight around.

Your job is no longer to optimize your own output, but to optimize the output of your group. Don't be the genius with a thousand helpers!

In particular, here is some advice to ease into a new engineering manager role:

  • Listen more. There is an expression about argumentative people - "they don't listen, they just reload." Since your words carry more weight, make sure you really understand other people's point of view before you offer your own. Once you wade in with guns blazing, other engineers will be less likely to confront you.
  • Code less. The tradeoff for more human communication is less computer communication. The time you spend helping make other people effective comes directly out of your average daily KLOC. Remember, you are making the team's total output better at the expense of your own output - this will smart a bit at first!
  • Start team building. Your success is now measured by the group's output, so invest the time you used to spend heads-down coding in hiring, mentoring and building trust across the team.
  • Stop architecting. If your vote counts for more than other engineers by dint of your hierarchical position, you can win architecture arguments just by yelling louder. To build a real engineering team, you have to separate the team leadership position from the tech leadership position. If you are the team leader, you just can't be the tech leader as well.

The net of it all is to use more influence, less telling; more carrot, less stick; you get the picture!

Monday, May 20, 2013

Health Care Transparency Requires Open Data


Transparent pricing and quality data are the foundation of the US economy, yet are entirely lacking in our health care industry. New players like Castlight have raised over $130 million to provide greater transparency, but only to selected customers who pay for that data.


I believe that making health care pricing information freely available (a Wikipedia for health care data) will help reduce the inequities in our health care system.

Last week's release of Medicare provider charge data from hospitals across the US pointed the way forward - making pricing data publicly available to everyone. Because the government pays in a unique way, this data is only a starting point - what is needed is a public data set showing what employers and individuals pay for these same services.

Several years ago, I had a personal experience that ignited a passion to drive change in US healthcare. While our family was living in Paris, my son was diagnosed with a benign brain tumor. We went through a series of medical procedures in France and then repeated them on our return to San Francisco.

Because our insurance only covered major medical procedures, we had to pay these bills personally. We found that medical costs in the US averaged seven to ten times what we had paid in Paris.

A good first step would be to analyze claims data from 3-5 large US employers to create a dataset showing the prices employers paid for the most common procedures across providers (including the 100 most frequently billed discharges published by Medicare). This analysis would help employers verify the health care prices they are paying.
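
To make this concrete, here is a minimal sketch of what that first analysis could look like. Everything here is hypothetical - the file name, column names and volume threshold are invented, and real claims data would need de-identification and normalization long before this step.

```python
import pandas as pd

# Hypothetical claims extract - one row per paid claim from a participating
# employer's health plan. Columns: procedure_code, provider_id, allowed_amount.
claims = pd.read_csv("claims_extract.csv")

# Average negotiated price per procedure at each provider, with claim counts
# so that low-volume outliers can be filtered out.
prices = (claims
          .groupby(["procedure_code", "provider_id"])["allowed_amount"]
          .agg(avg_price="mean", claim_count="count")
          .reset_index())

# Keep only procedure/provider pairs with enough volume to be meaningful.
prices = prices[prices["claim_count"] >= 30]

prices.to_csv("employer_price_benchmarks.csv", index=False)
```

A public dataset along these lines would let any employer compare the prices they pay against a market benchmark, procedure by procedure.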

Making this information available on a publicly available web site could unlock a wave of innovation in the world of health care, much as open source communities have transformed the software world.


Monday, March 18, 2013

Hadoop Will Not Mow Your Lawn


"The best minds of my generation are thinking about how to make people click ads." Jeff Hammerbacher ex- Facebook Architect

It turns out that when you have a lot of "best minds" working on the same problem, you come up with some pretty interesting technology - no matter how inane that problem may be.

The technology that those "best minds" at Yahoo came up with to target ads to users is called Hadoop. 

Hadoop is a powerful technology and, like most new IT solutions, is being touted as able to solve a vast number of technical ills. When companies discover that Hadoop will not in fact cure male pattern baldness, they will fall into the inevitable trough of disillusionment.

Here are some thoughts about what Hadoop can and cannot do:

1. RDBMSs are for business data, Hadoop is for web data

Almost all traditional business data fits well into the relational model, including data about customers (CRM), products (ERP) and employees (HR). This data should continue to live in relational databases, where it is much easier to manage and access than in Hadoop.

Almost all web data fits well into the Hadoop model, including log files, email and social media. This data would be almost impossible to store in a relational database, not just because of the volume, but because of the inherently nested quality of the data (threaded email conversations, web site directory structures, social media graphs).
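
To illustrate (the record layout below is invented), consider how naturally a nested structure captures a threaded conversation, versus the multiple joined tables a relational schema would need:

```python
# A hypothetical threaded-email record. In Hadoop this can be stored as-is
# (e.g., as a JSON document in HDFS); in a relational database it would be
# shredded across a messages table, a recipients table, and a
# self-referencing thread table.
message = {
    "message_id": "m-1001",
    "from": "alice@example.com",
    "to": ["bob@example.com", "carol@example.com"],
    "subject": "Q3 planning",
    "replies": [
        {
            "message_id": "m-1002",
            "from": "bob@example.com",
            "to": ["alice@example.com"],
            "replies": [],  # threads nest to arbitrary depth
        },
    ],
}
```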

2. Hadoop is really good at analyzing web data

Hadoop is incredibly good at looking at huge amounts of web data and figuring out why people clicked on the blue button instead of the red one. This capability generalizes to a handful of other machine log formats, but the list is relatively small.
How many other data types look like click streams? Not very many. How many other real-world problems lend themselves to analysis using web data analytic techniques? Also not as many as you might think.
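
For a flavor of what the blue-button-versus-red-button analysis looks like in practice, here is a toy map/reduce sketch in the style of Hadoop Streaming. The log format is invented, and a real job would run the mapper and reducer as separate processes over HDFS files rather than over an in-memory list.

```python
from itertools import groupby

def mapper(lines):
    """Emit (button_color, 1) for each click event in a tab-separated log."""
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        # Hypothetical log layout: timestamp, user_id, event, button_color
        if len(fields) == 4 and fields[2] == "click":
            yield fields[3], 1

def reducer(pairs):
    """Sum click counts per color; input is sorted by key, as Hadoop guarantees."""
    for color, group in groupby(pairs, key=lambda kv: kv[0]):
        yield color, sum(count for _, count in group)

if __name__ == "__main__":
    # Standalone demo of the same shuffle-and-sum Hadoop performs at scale.
    log = [
        "2013-03-18T10:00\tu1\tclick\tblue",
        "2013-03-18T10:01\tu2\tclick\tred",
        "2013-03-18T10:02\tu3\tclick\tblue",
    ]
    for color, total in reducer(sorted(mapper(log))):
        print(color, total)
```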

This is not to take anything away from the Hadoop market opportunity - as more of the world interacts through web applications and devices, more of the world's data will be reducible to click-stream-like formats.

The big data craze has taken over the tech media world much like the cloud craze. Most people know it is important but they don't know why. Many vendors get caught up in the hype cycle and start to believe that their technology has some sort of manifest destiny that will allow it to do much more than it can reasonably be expected to do.

3. Hadoop is a Pay Me Later Technology

Traditional data warehouses work on a "pay me now" basis. To get data into the data warehouse - even data that may not end up being useful in any way - you have to massage the data into a formal relational model. This is expensive and the data normalization process itself may make it impossible to get at the data in exactly the way you want to.

In contrast, Hadoop works on a "pay me later" basis. Data can be shoved into the Hadoop file system any old way. It is not until someone wants to analyze the data that you have to worry about how to connect all the pieces. The gotcha is that the price you pay in this "pay me later" model is much higher, requiring extensive programming in order to ask each question. 
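
Here is a small sketch of the "pay me later" cost (the raw log format is invented): the parsing that a warehouse ETL job does once, up front, has to be rewritten inside every Hadoop job that touches the data.

```python
import json

def latency_by_page(raw_lines):
    """Answer one question against raw, never-normalized log data.

    Every new question pays the parsing bill again: the code must know the
    raw layout, skip malformed rows, and cope with missing fields - work an
    ETL pipeline would have done once at load time.
    """
    totals, counts = {}, {}
    for line in raw_lines:
        try:
            event = json.loads(line)
        except ValueError:
            continue  # dirty rows surface at query time, not load time
        page, ms = event.get("page"), event.get("latency_ms")
        if page is None or ms is None:
            continue  # missing fields you discover only when you ask
        totals[page] = totals.get(page, 0) + ms
        counts[page] = counts.get(page, 0) + 1
    return {page: totals[page] / counts[page] for page in totals}
```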

In addition, because the normalization process wasn't done up front, you may not discover until much later that you were missing crucial pieces of information all along. So it is worth thinking up front about what sort of data to store in Hadoop and what kinds of questions you might want to answer about that data in the future.

Realistically, it will take most businesses that implement Hadoop several years to figure out whether all the data they are dumping into it produces real value out the back end, just as it took companies several years to see a payout from their investments in relational data warehouses.

4. Use the right tool for the right job

Back in my - very brief - high school shop days, we learned that the trick to making a really nice-looking ashtray is picking the right tool for the right job.
  • Hadoop is a web data query engine that requires a high level of effort for each new query.
  • A relational database is a business data query engine that requires a high level of effort to format and load data into the datastore.
The fastest way for companies to get into trouble with Hadoop is to try to use it as a one-size-fits-all data warehouse. Much of the news in the Hadoop world today has to do with SQL parsers that run on top of Hadoop data. This is a powerful and valuable technology, but it does not mean that you can throw out your data warehouse and replace it with Hadoop just yet.
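
To show what SQL-on-Hadoop looks like in practice, here is a hedged sketch using the Hive command line. It assumes the Hive CLI is installed and that a table definition already maps the raw log files; the table and column names are hypothetical.

```python
import subprocess

# HiveQL looks like SQL, but it compiles down to MapReduce jobs over files
# in HDFS - handy for ad hoc questions, not a drop-in warehouse replacement.
query = """
    SELECT button_color, COUNT(*) AS clicks
    FROM clickstream            -- hypothetical table over raw log files
    WHERE event = 'click'
    GROUP BY button_color
"""

# 'hive -e' runs a query string and prints tab-separated results to stdout.
result = subprocess.run(["hive", "-e", query], capture_output=True, text=True)
print(result.stdout)
```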



Tuesday, February 05, 2013

What I'm Talking About When I'm Talking About PaaS


I recently got some feedback on my previous musing that, from the customer viewpoint, PaaS equals automation. That led me to think about how to better articulate what this means to both customers and vendors.

Customers are basically indifferent to PaaS. This can be seen in the very modest market for PaaS as opposed to all the other aaS-es. Where is the PaaS producing anywhere near Salesforce's $2.3B in SaaS revenues or Amazon's ~$1B in IaaS revenues?

Customers are indicating - in the only way that matters - that the value they perceive from PaaS is orders of magnitude lower than the value of other cloud offerings.

Are customers right to be so indifferent about PaaS? In a word, yes.

Vendors have not done a good job of explaining the value of PaaS beyond singing paeans to the productivity that comes from being able to deploy a complete application without having to configure the platform services for that application.

NIST defines PaaS as "the capability to deploy applications onto the cloud without requiring the consumer to manage the underlying cloud infrastructure." (note: paraphrasing here, as the NIST folks don't seem to write in English)

Here's the problem with that definition: it mirrors exactly how 99% of enterprise developers already work! In the enterprise, the functional equivalent of PaaS is IT. Once an enterprise developer is done with their app, they throw it over the wall to the dev ops/app ops folks, who magically push it through the production cycle.

For most developers, the value proposition articulated by PaaS vendors just doesn't seem all that different from what they can get from internal IT or external IaaS.


  • IaaS allows me to rent a data center with a credit card and zero delay versus going through a six month IT acquisition cycle - eureka!
  • SaaS allows me to deploy whole new business capabilities without a two-year funding and development cycle - hallelujah!
  • PaaS has a lot more to offer than just productivity, but so far, that is all customers understand about it - so they let out a collective yawn.


Until PaaS vendors find ways to connect their platform to solving critical IT and business problems, PaaS will remain an under-performing member of the cloud family.

Friday, November 30, 2012

Big Data And The Open Source Model - Can This Marriage Be Saved?


It is amazing how many open source software companies out there are trying to get hit by the same $1B bolt of lightning that hit MySQL without realizing that the MySQL result is not repeatable.

Looking at the current batch of big data high flyers, from 10gen to Cloudera to Hortonworks, each seems to be vying for the same kind of ubiquitous usage that enabled MySQL to get a more than 20x multiple. What they don't realize is that the failure of early open source acquisitions to deliver substantial value to their owners has made buyers much more wary.

Companies like MySQL were valued based on a mystical belief that downloads could be monetized (not unlike the similarly wishful belief in monetizing eyeballs that motivated disastrous dot-com acquisitions in the 90s). Moving forward, open source companies will be valued the old-fashioned way: by the viability of their business model.

Here are the top three places most big data open source companies are missing the boat:

  1. Prioritizing buzz over business model: although buzz is critical for adoption growth, a viable business model trumps all in positioning a company for IPO or acquisition. First and foremost, this means being able to charge significant prices for add-on product pieces that customers want, such as security, clustering and monitoring.
  2. Confusing services with sales: low-margin services revenues are no substitute for high-quality license revenues. More importantly, companies that build up large services teams often neglect to fully integrate their product, since the gaps in the product are what drive services engagements. This lack of product maturity in turn prevents customers from being willing to pay much for the product itself - a classic vicious cycle.
  3. Hoping for a desperate buyer: companies that purchased open source players have by and large failed to translate open source leadership into commercial market share. The open source downloads generate lots of buzz but little license revenue, saddling their owners with an expensive, services-led business. In the immortal words of Mitt Romney, hope is not a strategy (although it *did* turn out to be an ok strategy for the incumbent in that case).


Thursday, November 15, 2012

The Genius, The Conductor and The Bureaucrat


No, this is not a joke about three guys walking into a bar but the result of some recent musing about how the art of management is practiced in Silicon Valley.

The classic Silicon Valley stories often feature what Jim Collins calls "the genius with a thousand helpers" (from his book Good to Great). Steve Jobs, Larry Ellison and many other valley icons were known for their vice-like control over all aspects of their business.

When that Genius really is the smartest person in the world, you get the iPhone. When they are not, you get Palm's webOS. Working for a boss who always has to be the smartest person in the room is a humbling experience, but at least you know where you stand - at the bottom.

The contrast to the Genius is the Conductor, a person who - without playing an instrument themselves - is judged purely on their ability to draw great performances out of others. This is the idealized servant CEO touted in all the business school texts but seen much less frequently in the wild.

Examples of the Conductor style of leadership would include people like Paul Maritz of VMware. In my experience, there is nothing in the work world that beats the thrill of working with a committed team on big, hairy, audacious goals where the person leading the charge is focused purely on helping the team win.

The third category is the Bureaucrat. The thing to remember about Bureaucrats is that what they are best at producing is more Bureaucrats. These are people who are always overwhelmed with work but never make decisions that would offload that work. In a way, they follow the same model as the Genius, in that all decisions have to come through them.

The goal for all CEOs should be to aspire to play the Conductor role, while realizing that it is human nature to slip into Genius and Bureaucrat now and again.

Monday, October 29, 2012

PaaS *Is* Automation


Cloud computing has a challenge endemic to many Silicon Valley advances - a great technology triumph somewhat disconnected from a clear business benefit.

The developer version of cloud computing is PaaS (Platform as a Service). Like cloud computing in general, PaaS has struggled to articulate clearly why it deserves to capture the hearts and minds of enterprise developers.

Hipper web developers have had no such hesitations and have collectively leapt to cloud computing platforms like Heroku and Cloud Foundry for Ruby on Rails.

This difference in adoption tells an important story. By and large, enterprise developers have spent years building highly automated toolchains centered around tools like Eclipse and ClearCase, targeting both desktop and web clients.

In contrast, platforms like Ruby on Rails were designed for web deployment and web tooling, so they fit the online/PaaS model more naturally.

I would argue that from an enterprise developer's point of view, PaaS is just about automation. When a PaaS appears with associated tooling that makes it easier for enterprise developers to do their work in the cloud than on their desktop, we will see a big spike in adoption. At a minimum, this will require the following:

1. Seamless integration with existing tools: developers will want an easy on-ramp that doesn't require them to abandon existing tools like Eclipse and ClearCase.
2. Automated build and test: this is where an integrated cloud tool chain can really rock, but only if it is easier to use than existing internal solutions.
3. Opinionated client stack: one of the biggest things holding Java back for full web development is that Java is not opinionated about how to build a client stack in the way that, for example, Ruby on Rails is. This means every development team has to come up with its own scaffolding and build process, making it difficult to deliver an automation solution that satisfies.

Of these three, the third is the real show stopper. More on this later.

ps. I thought briefly about coining a new acronym, PaaS *Is* Automation Stupid, but decided that we have enough acronyms in this space.

Thursday, August 23, 2012

Evolutionary and Revolutionary Clouds




Now that we are a couple of years into the great cloud journey, it is pretty clear that the big bang theory of cloud conversion just ain't happening.

Yes, ISVs are moving rapidly to the SaaS model, and it would be hard to find a software startup that is *not* starting in the cloud, but enterprise adoption of the public cloud is happening at a more stately pace.

In large part this is due to the simplification required to make public clouds efficient and the complexity that characterizes most enterprise IT environments.  To put it differently, the public cloud makes app deployment simple by pruning app deployment options to the point that few enterprise applications can fit.

Moving forward, I see two paths for cloud adoption: evolutionary and revolutionary.

  • Revolutionary cloud: public clouds like Amazon EC2 and CloudFoundry.com represent a revolutionary leap forward for companies that are willing/able to abandon their current platforms. The revolutionary cloud offers a high degree of operational productivity at the expense of service choice (e.g., you can have any color you want as long as it's black).
  • Evolutionary cloud: public/private clouds like VMware's vCloud Director enable enterprises to get cloud benefits (public/private deployment, low upfront cost, elastic scaling, self-healing) without having to make major changes to their application architecture. The evolutionary cloud offers a lower level of productivity with a greater range of choice (e.g., you trade off productivity for flexibility).

Over time, the revolutionary cloud will offer more choice and flexibility while the evolutionary cloud will offer higher automation. Some questions for enterprise developers to answer as they move along this path include:

  1. How much control do I have over the deployed application environment? The more flexible the deployment environment, the easier it is to move that application to the public cloud.
  2. How do I move applications between different clouds? Having a way to move applications between evolutionary and revolutionary cloud architectures is just as important as being able to move apps between different flavors of public clouds.

Thursday, June 07, 2012

Building Killer Apps with Big Data


One thing that gets lost in the general Big Data hubbub is the critical question of apps. Big Data can provide stunning business insights, but unless those insights are embodied in an application that can galvanize new business behaviors, they are not worth much.

VMware has been a thought leader in the area of cloud application platforms for some time. Now we are turning our attention to the intersection of Big Data and Cloud Computing.

What does it take to build applications that can move easily between private and public cloud while accessing data inside and outside of the firewall?

In particular, what are the best practices for building cloud applications that leverage big data? Here are some of our initial thoughts:


  1. Lightweight services: REST is the new SOA - lightweight services form the basis for supporting web front ends, while pub/sub messaging like RabbitMQ forms the basis for back-end workflow and transactions (see the sketch after this list).
  2. Mobile-first UI: the Twitter Bootstrap library finally enables developers to build HTML5 apps for mobile devices that scale beautifully to tablets and browser-based desktops.
  3. Fast data: scaling the front end of the application often requires in-memory data management. The easiest way to interact with core data is through a SQL interface such as SQLFire.
  4. Big Data: knowing what is going on right now takes fast data; knowing what to do about it takes access to large amounts of historical data. The key here is to provide integration between the two data sources so that the data warehouse is kept as up to date as possible with the in-memory database.
  5. Application-level management: managing application performance as a series of logical tiers rather than physical instances eliminates a great deal of complexity for systems admins.
  6. Cloud deployment: automated dev/ops solutions like Application Director take the black magic out of large-scale systems deployment, collapsing a multi-day deployment sequence into a few minutes of scripted wonder.
  7. Elastic scaling: a core value of cloud computing environments like Cloud Foundry is sizing the compute resources to the task at hand - when demand is high, the resources scale up, and vice versa.
  8. Self healing: cloud means never having to say you're sorry that your web site went down because a component croaked and couldn't restart - again, Cloud Foundry comes to the rescue.
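
To make item 1 concrete, here is a minimal sketch of the pub/sub half using RabbitMQ's Python client, pika. The host, queue name and message shape are all illustrative, not a prescribed design.

```python
import pika

# Publish an order event to RabbitMQ so downstream services can pick it up
# asynchronously - the back-end workflow pattern described in item 1.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # survive broker restarts

channel.basic_publish(
    exchange="",                 # default exchange routes by queue name
    routing_key="orders",
    body='{"order_id": 42, "action": "fulfill"}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```

A consumer on the other side declares the same queue and processes messages at its own pace, which is what decouples the REST front end from the back-end workflow.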
I will be discussing how to build killer apps for big data at the GigaOm Structure conference at the end of this month along with Tom Roloff, COO, EMC Consulting.