Wednesday, August 26, 2015

People Only Buy To Get Promoted - The Key To Enterprise Sales

I have been fortunate to have many good sales mentors in my career but the best hands down was Joe Roebuck. Joe headed sales at Sun Microsystems for 17 years and was on my board at Persistence Software for 5 years.

Joe also gave me the most important insight about how to sell enterprise software:

People only buy to get promoted.

The enterprise software version of this pithy statement would be something like: enterprise buyers will only buy your shiny object if they see it leading directly to recognition, acclaim and promotion or at least a raise.

There is a lifetime of sales knowledge encapsulated in that quote. Here is how I interpret it:

  • Status quo is easy: enterprise software is a business in which innovative upstarts try to unseat incumbents. The easy purchase decision in enterprise software is always to go with the incumbent.
  • Shiny objects are risky: enterprise buyers always have a choice between safe status quo vendors and an array of risky but alluring new vendors.
  • Career advancement is why buyers take risks: if a buyer does not get a personal benefit - attention, a raise, a promotion - the reward quite literally does not outweigh the risk.
  • Advancing customer careers is how companies win: most salespeople think only as far as the customer's signature and maybe the initial implementation. Making a customer successful is a longer-term venture and extends at least to the buyer's next HR review cycle.

There is no more passionate evangelist than a successful buyer, and it only takes a few really happy buyers for the market's herd instinct to kick in. For example, VMworld pulls in 10,000 attendees a year, all of whom believe that VMware products are advancing their careers.

Buyers know that product features don't guarantee success. Just because a product is objectively better doesn't mean it will be successfully implemented, integrated and maintained by the vendor. A key to success in sales is to structure a deal in such a way that the company has an incentive to stay focused on the success of the deal over time.

It is interesting that people always say of incumbents like IBM, “nobody gets fired for buying IBM.” The flip side of that is the only reason a buyer would make a riskier choice would be for the opportunity to be promoted, aka the opposite of being fired.

Wednesday, August 19, 2015

When Will Cloud Come to PaaS?

One of the perennial cloud predictions has been that 200x would be the year of the Platform as a Service (PaaS) cloud. The logic goes that if an automated data center in the sky is good, an automated development platform in the sky must be even better.

“Normal” clouds like Amazon AWS give the developer a virtual computer to load their OS and App onto. PaaS gives the developer a virtual computer with the OS, database and middleware “pre-loaded,” thereby simplifying the deployment.

Yet so far, PaaS adoption has been anemic and Gartner puts PaaS at 1% of the overall cloud market. At the same time, new technologies like Docker and containers have attracted far more attention from the developer community.

PaaS Lacks “Write Once, Run Anywhere” Simplicity

Developers love the simplicity of “write once, run anywhere.” This is what gave Java its initial allure and it is at the core of Docker’s recent ascendance to the top of the shiny tech object heap. PaaS has traditionally been more of a “write differently for each place” kind of solution.  Issues include:
  1. PaaS lock-in – there is no example in the industry of PaaS portability; each PaaS has its own unique services and configuration. While IaaS suffers from similar lock-in issues, the effort required to port from one IaaS cloud to another is much lower.
  2. Anemic ecosystem – real applications use many different services, such as database, file storage, security and messaging. In order to deploy an application in a PaaS, the PaaS must support every service that app needs.
  3. Public/private inflexibility – many PaaS offerings are cloud only (Heroku) or on premise (OpenShift). Even for PaaS offerings that can run both on and off premise, replicating the exact service ecosystem in each environment is challenging.

PaaS For SaaS Is a Winner

A no-brainer use of PaaS is to extend existing SaaS applications. In this case, the “write once, run anywhere” problem goes away because there is only one place to build and run the application.

The big winner in PaaS to date has been SalesForce. Their Force.com platform makes it easy for companies to extend their CRM applications or build entirely new applications. With this platform, SalesForce has created huge competitive differentiation in the CRM space while also building a PaaS revenue stream approaching $1B a year, dwarfing any other PaaS offering.

Cloud Native PaaS Could Go Mainstream

Google recently released their cloud native platform, called Kubernetes (which means pilot in Greek). Kubernetes is a cloud operating system for containers that runs anywhere. A number of PaaS vendors are banding together to define the requirements for cloud native computing.

The promise is to simplify still further the process of provisioning services to cloud containers, regardless of where they are running. It will be exciting to see how existing PaaS vendors like CloudFoundry incorporate these new technologies into their offerings.



Monday, August 10, 2015

Enterprises Need A Panic Button for Security Breaches

Most home security systems have a panic button - if you hear something go bump in the night, you can push the panic button to start the sirens wailing, call the cops and hopefully send the bad guys scurrying. As useful as this is for homeowners, enterprises need a security panic button even more.

Security spending is heavily weighted towards keeping the bad guys out, yet media coverage has demonstrated how often they get in anyway. According to the CyberEdge Group, 71% of large enterprises reported at least one successful hacking attack in 2014.

While there is extensive advice on the manual steps to take in response to a malicious attack, there is little in the way of automated response. This is an important area in which to extend enterprise automation.

What might a Panic Button for automated response to security incidents look like? Essentially this would be an automated workflow that would implement a set of tasks to eliminate the current attack, identify existing losses and minimize future damage. An example workflow could include:

  1. Identify compromised systems from intrusion detection tools and disconnect compromised systems from network
  2. Search for unauthorized processes or applications currently running or set to run on startup and remediate
  3. Run file integrity checks and restore files to last known good state
  4. Examine authentication system for unauthorized entries/changes and roll back suspect changes
  5. Make backup copies of breached systems for forensic analysis
  6. Identify information stolen from OS and database logs
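The steps above can be sketched as an orchestrated workflow. The step functions below are hypothetical stubs standing in for real intrusion-detection, file-integrity and backup tooling; the point is the orchestration pattern, not any particular tool integration.

```python
# Sketch of an automated incident-response ("panic button") workflow.
# Every function here is a made-up stand-in for real security tooling.

def run_panic_workflow(steps, context):
    """Run each remediation step in order, recording its outcome.
    A failing step is logged but does not abort the rest of the response."""
    report = []
    for name, step in steps:
        try:
            result = step(context)
            report.append((name, "ok", result))
        except Exception as exc:
            report.append((name, "failed", str(exc)))
    return report

# Stub implementations of the six steps described above.
def isolate_hosts(ctx):
    return [f"disconnected {h}" for h in ctx["compromised_hosts"]]

def kill_rogue_processes(ctx):
    return [p for p in ctx["processes"] if p not in ctx["approved_processes"]]

def restore_files(ctx):
    return "restored files to last known good state"

def audit_auth(ctx):
    return "rolled back unauthorized account changes"

def snapshot_for_forensics(ctx):
    return [f"imaged {h}" for h in ctx["compromised_hosts"]]

def scan_logs_for_exfiltration(ctx):
    return "flagged suspicious bulk reads in DB logs"

STEPS = [
    ("isolate", isolate_hosts),
    ("kill-processes", kill_rogue_processes),
    ("restore-files", restore_files),
    ("audit-auth", audit_auth),
    ("forensic-backup", snapshot_for_forensics),
    ("log-analysis", scan_logs_for_exfiltration),
]

context = {
    "compromised_hosts": ["web-07", "db-02"],
    "processes": ["sshd", "cryptominer"],
    "approved_processes": {"sshd"},
}
report = run_panic_workflow(STEPS, context)
for name, status, result in report:
    print(name, status, result)
```

Note the design choice that a failed step is recorded rather than fatal: in an active breach you want the rest of the response to keep running even if one remediation stumbles.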

By creating automated “Panic Button” workflows that respond to security incidents, enterprises can reduce the damage of an attack. This automated approach can also show customers that an enterprise is taking full precautions to protect their personal information from falling into the wrong hands.

Wednesday, May 13, 2015

Entrepreneurial Management – The Loose-Tight Loop

For the last 20 years, I have been leading teams both small (2 partners and a turtle) and large (over 850 employees). During that time I have had big successes (IPO on Nasdaq, sale to VMware) and crushing failures (remember the Y2K bubble?). Sitting on numerous boards has also given me a ringside seat to observe different management styles.

Through this experience I have evolved a management style to drive rapid business transformation and growth. I call this style the “loose-tight loop” (a mash-up of ideas from the Tom Peters book “In Search of Excellence” and OODA loops).

In the very dynamic startup world, it is often hard to strike the right balance between “if I do it myself I know it will get done right” and letting chaos rule. Because the market is evolving at the same time as the company, assumptions about customers, competitors and technology change rapidly as well.

I see the job of the CEO as aligning the team on a set of audacious goals and orchestrating the achievement of those goals through three activities:
  • Tight on what to do – align the team on goals and priorities
  • Loose on how to do it – trust the team to reach those goals efficiently and creatively
  • Loop to learn – communicate regularly to learn what is working and not working (aka trust but verify)

Over time, I have adopted a number of agile process ideas to put the loose-tight loop into practice:
  • Daily standup – 15 min call to communicate actions and identify issues 
  • Weekly top 5s – on Monday, each exec lists their 5 priorities for that week, summarizes status on last week's top 5 and updates MBOs
  • Weekly check in – 1 hr one on one meeting to collaborate and coach
  • 6 week sprint – 2 hr meeting to go deep on 1-2 issues, review MBOs for the last sprint and set MBOs for the next sprint
  • Annual plan – 2 day planning session to rebuild business plan for next year


Management By Objectives (MBOs) are critical as they are the explicit link between team objectives and executive priorities. Linking MBOs too closely to compensation can reduce their value. MBOs should represent challenging tasks – 100% achievement is not expected and is likely a sign that the goals were too easy. These MBOs become calls to action for the team to support each other in accomplishing tough tasks.  

In the loose-tight loop, the CEO's job is to get everyone onto the same map and working together to reach the same destination. The executives' job is to execute in alignment with the plan and ask for help if it turns out the assumptions are wrong.

In fact, the biggest execution risk is that execs are too slow to ask for help when they run into trouble. More experienced execs have the confidence to ask for help when they need it; less experienced execs try to bluff their way through the problem. This is dangerous to the whole team, because execution challenges often mask underlying mistaken assumptions.

Saturday, March 08, 2014

Location, location, location - Why I Joined BMC

The enterprise software market is not that different from the real estate market - where you are positioned in the market is everything.

In the nerdier-than-thou Bay Area, moving from VMware to BMC is not the most obvious move, so here are some of my thoughts on my decision.

At this point, I have started 2 companies (Persistence, Medaid), gone public once (PRSW - never again!), sold 3 companies (Persistence, WaveMaker, Reportive) and led one spinout (Pivotal).

Figuring out what to do next was a challenge.

I had always felt that in evaluating a job, team comes first and opportunity comes second (or in Jim Collins-speak, first who, then what).

When I was first introduced to BMC, I spoke to Eric Yau and was impressed by his vision for transforming BMC — I felt it was very similar to the transformation project I had worked on at VMware. As I met with other BMC executives, I was struck by their overall quality and their commitment to making BMC the leader in cloud and automation management.

I believe that BMC has a unique position in the cloud space because they are not tied to a particular cloud platform. The other key players in the space - VMware, Amazon, Microsoft - all have a dog in the fight. They *care* which underlying platform their cloud automation manages.

In short, the other production-class cloud managers are focused on building a purebred cloud backed by their OS or hypervisor - only BMC has a singular focus on hybrid cloud.

If a key reason to move to cloud is greater customer choice, those same customers will be looking for the “Switzerland of cloud managers” to preserve their choice.

Time will tell, but so far I am thrilled with both the market opportunity in front of BMC and the collaborative culture within BMC.



Thursday, September 12, 2013

Engineering Management - Shaolin Style


A friend of mine just got a well-deserved promotion from code horse to manager. Here are my quick thoughts on making that transition.

The basic idea is that when you are given a little more responsibility, your words and actions carry more weight. For that reason, it is important to be careful about throwing that weight around.

Your job is no longer to optimize your own output, but to optimize the output of your group. Don't be the genius with a thousand helpers!

In particular, here is some advice to ease into a new engineering manager role:

  • Listen more. There is an expression about argumentative people - "they don't listen, they just reload." Since your words carry more weight, make sure you really understand other people's point of view before you offer your own. Once you wade in with guns blazing, other engineers will be less likely to confront you.
  • Code less. The tradeoff for more human communication is less computer communication. The time you spend helping make other people effective comes directly out of your average daily KLOC. Remember, you are making the team's total output better at the expense of your own output - this will smart a bit at first!
  • Start team building.
  • Stop architecting. If your vote counts for more than other engineers by dint of your hierarchical position, you can win architecture arguments just by yelling louder. To build a real engineering team, you have to separate the team leadership position from the tech leadership position. If you are the team leader, you just can't be the tech leader as well.

The net of it all is to use more influence, less telling; more carrot, less stick; you get the picture!

Monday, May 20, 2013

Health Care Transparency Requires Open Data


Transparent pricing and quality data are the foundation of the US economy, yet are entirely lacking in our health care industry. New players like Castlight have raised over $130 million to provide greater transparency, but only to selected customers who pay for that data.


I believe making health care pricing information freely available (like Wikipedia for health care data) will help reduce these inequities in our health care system. 

Last week's release of Medicare provider charge data from hospitals across the US pointed the way forward - making pricing data publicly available to everyone. Because the government pays in a unique way, this data is only a starting point - what is needed is a public data set showing what employers and individuals pay for these same services.

Several years ago, I had a personal experience that ignited a passion to drive change in US healthcare. While our family was living in Paris, my son was diagnosed with a benign brain tumor. We went through a series of medical procedures in France and then repeated them on our return to San Francisco.

Because our insurance only covered major medical procedures, we had to pay these bills personally. We found that medical costs in the US averaged seven to ten times higher than what we had paid in Paris.

A good first step would be to analyze claims data from 3-5 large US employers to create a dataset showing the prices employers paid for the most common procedures across providers (including the top 100 most frequently billed discharges published by Medicare). This analysis would help employers verify the health care prices they are paying.

Making this information available on a publicly available web site could unlock a wave of innovation in the world of health care, much as open source communities have transformed the software world.


Monday, March 18, 2013

Hadoop Will Not Mow Your Lawn


"The best minds of my generation are thinking about how to make people click ads." - Jeff Hammerbacher, ex-Facebook architect

It turns out that when you have a lot of "best minds" working on the same problem, you come up with some pretty interesting technology - no matter how inane that problem may be.

The technology that those "best minds" at Yahoo came up with to target ads to users is called Hadoop. 

Hadoop is a powerful technology and, like most new IT solutions, is being touted as being able to solve a vast number of technical ills. When companies discover that Hadoop will not in fact cure male pattern balding, they will fall into the inevitable trough of disillusionment.

Here are some thoughts about what Hadoop can and cannot do:

1. RDBMSs are for business data, Hadoop is for web data

Almost all traditional business data fits well into the relational model, including data about customers (CRM), products (ERP) and employees (HR). This data should continue to live in relational databases, where it is much easier to manage and access than in Hadoop.

Almost all web data fits well into the Hadoop model, including log files, email and social media. This data would be almost impossible to store in a relational database, not just because of the volume, but because of the inherently nested quality of the data (threaded email conversations, web site directory structures, social media graphs).

2. Hadoop is really good at analyzing web data

Hadoop is incredibly good at looking at huge amounts of web data and figuring out why people clicked on the blue button instead of the red one. This can be generalized to a few other machine-generated log formats, but the list is relatively small.

How many other data types look like click streams? Not very many. How many other real-world problems lend themselves to analysis using web data analytic techniques? Also not as many as you might think.
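The kind of click-stream analysis Hadoop is built for can be illustrated with a toy map/reduce pass over a few log records. Real Hadoop distributes these phases across a cluster of machines; the log format and field names below are made up, but the shape of the computation is the same.

```python
# Toy map/reduce over click-stream log lines: count clicks per button color.
from collections import defaultdict

log_lines = [
    "2013-03-18T10:01 user=1 click=blue",
    "2013-03-18T10:02 user=2 click=red",
    "2013-03-18T10:03 user=3 click=blue",
    "2013-03-18T10:04 user=1 click=blue",
]

# Map phase: parse each line and emit a (button_color, 1) pair.
mapped = []
for line in log_lines:
    color = line.rsplit("click=", 1)[1]
    mapped.append((color, 1))

# Shuffle + reduce phase: sum the counts per key.
counts = defaultdict(int)
for color, n in mapped:
    counts[color] += n

print(dict(counts))  # → {'blue': 3, 'red': 1}
```

On a cluster, the map and reduce loops run as many parallel tasks over many files, which is what makes the approach work at web scale.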

This is not to take anything away from the Hadoop market opportunity - as more of the world interacts through web applications and devices, more of the world's data will be reducible to click-stream-like formats.

The big data craze has taken over the tech media world much like the cloud craze. Most people know it is important but they don't know why. Many vendors get caught up in the hype cycle and start to believe that their technology has some sort of manifest destiny that will allow it to do much more than it can reasonably be expected to do.

3. Hadoop is a Pay Me Later Technology

Traditional data warehouses work on a "pay me now" basis. To get data into the data warehouse - even data that may not end up being useful in any way - you have to massage the data into a formal relational model. This is expensive and the data normalization process itself may make it impossible to get at the data in exactly the way you want to.

In contrast, Hadoop works on a "pay me later" basis. Data can be shoved into the Hadoop file system any old way. It is not until someone wants to analyze the data that you have to worry about how to connect all the pieces. The gotcha is that the price you pay in this "pay me later" model is much higher, requiring extensive programming in order to ask each question. 

In addition, because the normalization process wasn't done up front, it won't be until later that you may discover that you were missing crucial pieces of information all along. Thus it does bear some thinking up front on what sort of data to store in your Hadoop database and what kinds of questions you might want to be able to answer about that data in the future.  
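The "pay me later" trade-off can be sketched in a few lines: raw semi-structured records go into storage as-is, and the schema work happens at query time. The event records and field names below are made up for illustration.

```python
# Schema-on-read sketch: heterogeneous events stored untouched,
# parsed only when a question is asked.
import json

# "Pay me later": dump events into storage any old way.
raw_store = [
    '{"event": "purchase", "user": 7, "amount": 19.99}',
    '{"event": "click", "user": 7, "page": "/home"}',
    '{"event": "purchase", "user": 9, "amount": 5.00}',
]

# Each new question requires its own parsing/extraction code.
def total_purchases(store):
    total = 0.0
    for line in store:
        rec = json.loads(line)  # schema applied at read time
        if rec.get("event") == "purchase":
            total += rec["amount"]
    return total

print(round(total_purchases(raw_store), 2))  # → 24.99

# A question about a field that was never captured ("region", say)
# cannot be answered at all - the gotcha described above.
```

Contrast this with a data warehouse, where the same parsing effort is paid once at load time and every subsequent question is a cheap SQL query.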

Realistically, it will take most businesses that implement Hadoop several years to figure out whether all the data they are dumping into it produces real value out the back end, just as it took several years before companies started to get a payout from their investments in relational data warehouses.

4. Use the right tool for the right job

Back in my - very brief - high school shop days, we learned that the trick to making a really nice-looking ashtray is picking the right tool for the right job.
  • Hadoop is a web data query engine that requires a high level of effort for each new query.
  • Relational is a business data query engine that requires a high level of effort to format and load data into the datastore.
The fastest way for companies to get into trouble with Hadoop is to try to use it as a one-size-fits-all data warehouse. Much of the news in the Hadoop world today has to do with SQL parsers that run on top of Hadoop data. This is a powerful and valuable technology, but it does not mean that you can throw out your data warehouse and replace it with Hadoop just yet.



Tuesday, February 05, 2013

What I'm Talking About When I'm Talking About PaaS


I recently got some feedback on my previous musing that from the customer viewpoint, PaaS equals automation. That led me to think of ways to articulate better what this means both to customers and vendors.

Customers are basically indifferent to PaaS. This can be seen in the very modest market for PaaS as opposed to all the other aaS-es. Where is the PaaS producing anywhere near SalesForce's $2.3B in SaaS revenues or Amazon's ~$1B in IaaS revenues?

Customers are indicating - in the only way that matters - that the value they perceive from PaaS is orders of magnitude lower than the value of other cloud offerings.

Are customers right to be so indifferent about PaaS? In a word, yes.

Vendors have not done a good job of explaining the value of PaaS beyond singing paeans to productivity that comes from being able to deploy a complete application without having to configure the platform services for that application.

The NIST definition of PaaS defines it as "the capability to deploy applications onto the cloud without requiring the consumer to manage the underlying cloud infrastructure." (note: paraphrasing here as the NIST folks don't seem to write in English)

Here's the problem with that definition: it mirrors exactly how 99% of Enterprise developers already work! In the enterprise, the functional equivalent of PaaS is IT. Once an enterprise developer is done with their app, they throw it over the wall to dev ops/app ops folks who magically push it through the production cycle.

For most developers, the value proposition articulated by PaaS vendors just doesn't seem all that different from what they can get from internal IT or external IaaS.


  • IaaS allows me to rent a data center with a credit card and zero delay versus going through a six month IT acquisition cycle - eureka!
  • SaaS allows me to deploy whole new business capabilities without a two-year funding and development cycle - hallelujah!
  • PaaS has a lot more to offer than just productivity, but so far, that is all customers understand about it - so they let out a collective yawn.


Until PaaS vendors find ways to connect their platform to solving critical IT and business problems, PaaS will remain an under-performing member of the cloud family.

Friday, November 30, 2012

Big Data And The Open Source Model - Can This Marriage Be Saved?


It is amazing how many open source software companies out there are trying to get hit by the same $1B bolt of lightning that hit MySQL without realizing that the MySQL result is not repeatable.

Looking at the current batch of big data high flyers, from 10gen to Cloudera to Hortonworks, each seems to be vying for the same kind of ubiquitous usage that enabled MySQL to get a more than 20x multiple. What they don't realize is that the failure of early open source acquisitions to deliver substantial value to their owners has made buyers much more wary.

Companies like MySQL were valued based on a mystical belief that downloads could be monetized (not unlike the similarly wishful belief in monetizing eyeballs that motivated disastrous dot com acquisitions in the 90s). Moving forward, open source companies will be valued the old-fashioned way: by the viability of their business model.

Here are the top three places most big data open source companies are missing the boat:

  1. Prioritizing business model behind buzz: although buzz is critical for adoption growth, a viable business model trumps all in positioning a company for IPO or acquisition. First and foremost, this means being able to charge significant prices for add-on product pieces that customers want, such as security, clustering and monitoring.
  2. Confusing services with sales: low margin services revenues are no substitute for high quality license revenues. More importantly, companies that build up large services teams often neglect to fully integrate their product, since those integration gaps are what drive services engagements. This lack of product maturity in turn prevents customers from being willing to pay much for the product itself - a classic vicious cycle.
  3. Hoping for a desperate buyer: companies that purchased open source players have by and large failed to translate open source leadership into commercial market share. The open source downloads generate lots of buzz but little license revenue, saddling their owners with an expensive, services-led business. In the immortal words of Mitt Romney, hope is not a strategy (although it *did* turn out to be an ok strategy for the incumbent in that case).