Blog Series: ASC 606 - What it Means for Software Companies


A new FASB revenue recognition rule, ASC 606 (ASU 2014-09), and its IFRS counterpart, IFRS 15, place renewed pressure on your accounting team and processes. Lack of readiness risks a multitude of issues, from reduced accounting productivity, to unforeseen revenue impact, to the risk of restatement. These rules affect revenue recognition on a broad range of contractual agreements with customers, and their impact is far-reaching: companies with bundled products and services, variable discounts, differing payment and renewal terms, sales commissions, royalties, or other specific contract commitments should be especially cognizant of the new rules.

For public companies, adoption is effectively 2018, while for private companies, it’s 2019. If you’re a private company, you may think you’ve still got plenty of time. But the reality is that contracts you are writing now will likely be impacted by the new rules, so it’s essential to make sure revenue recognition processes are running smoothly when the time comes. And in the meantime, you need to understand what the potentially significant impact will be on revenue, and compare old versus new, sooner rather than later.

Accounting systems need to separate and time revenue and related expenses accordingly, and ideally automatically, while also helping finance and accounting understand the impact on the income statement. If these systems can't, accounting teams will get dragged into the weeds of contract and spreadsheet detail, wasting time and creating significant compliance risk.

The standards will likely affect entities’ financial statements, business processes and internal control over financial reporting. While some entities will be able to implement the new standards with limited effort, others may find implementation to be a significant undertaking. Successful implementation will require an assessment and a plan for managing the change.
— Ernst & Young

These rules are particularly relevant for subscription-based software companies. Companies with extensive contract negotiation cycles will want to make sure everything is documented, from sales, to service, to finance, because contract items like discounts, payment and renewal terms, multiple inter-related contracts, activation fees, even sales commissions can make a profound difference to how revenue and expenses look under the new rules. In a nutshell, accounting systems need to be more connected, more intelligent, and more automated than ever before.

Let's start with timing. With ASC 606 and IFRS 15, there are a number of dates to think about. For public companies adoption is effectively 2018, while for private companies adoption begins for accounting periods starting after December 15, 2018. Although the effective date is still a few years out, retrospective adoption means the rules apply to contracts entered into before the adoption date, with the need to show comparative financial statements for the years prior to adoption.

The new standards provide accounting guidance for all revenue arising from contracts with customers, from goods to services. For contracts your organization is writing today, for those that already extend past the adoption date, and for those with renewal terms, you need to understand how they look under the new guidance. The key is to understand your exposure, compare revenue under existing and new guidance, adjust the business processes that delay revenue, and reduce compliance risk through automation.

The core principle is that an entity should recognize revenue to depict the transfer of goods or services to customers in an amount that reflects the consideration to which the entity expects to be entitled in exchange for those goods or services.
— AICPA

While some companies will be able to implement the new standards with limited effort, most will find implementation to be a significant undertaking, especially if their internal accounting systems lack built-in readiness. Successful implementation will require an assessment and a plan for managing the change.

As a result of the new standard, entities will need to comprehensively reassess their current revenue accounting and determine whether changes are necessary.
— Deloitte

Technology companies must take special care with these rules. They have broad impacts on multiple components of a high-tech company's typical contractual touchpoints and levers. Whether you are a public company, or a private company planning on going public, looking for additional funding, or planning to be acquired, this rule has the potential to reshape your revenue, and your valuation.

The FASB and IASB guidelines detail five steps to adoption, and each of these steps has specific implementation considerations for technology companies.

Each of these steps has important implications for software companies and their business processes. In the next posts, we'll check out each of these steps and what it means for your revenue recognition processes, and your accounting systems.
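For context, the five steps are: identify the contract with a customer; identify the performance obligations; determine the transaction price; allocate the transaction price to the performance obligations; and recognize revenue as those obligations are satisfied. To show why step 4 alone matters so much for bundled software deals, here's a deliberately simplified sketch in Python, with made-up numbers and none of the real-world nuances like variable consideration, of allocating one contract price across obligations by relative standalone selling price:

```python
# Simplified sketch of ASC 606 step 4: allocate a bundled contract's transaction
# price to performance obligations in proportion to standalone selling prices (SSP).
# Obligation names and figures are hypothetical, for illustration only.

contract_price = 100_000  # total transaction price for the bundle

standalone_selling_prices = {
    "software_subscription": 80_000,
    "implementation_services": 25_000,
    "premium_support": 15_000,
}

total_ssp = sum(standalone_selling_prices.values())

allocation = {
    obligation: round(contract_price * ssp / total_ssp, 2)
    for obligation, ssp in standalone_selling_prices.items()
}

for obligation, amount in allocation.items():
    print(f"{obligation}: {amount:,.2f}")
# software_subscription: 66,666.67
# implementation_services: 20,833.33
# premium_support: 12,500.00
```

Even in this toy case, the revenue attached to each element, and therefore when it can be recognized, shifts with the assumed standalone selling prices, which is exactly why bundled contracts deserve early attention.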

 

 

Is Tableau the New Netscape?

Tableau has done for the discovery of data what Netscape did for the discovery of information with its pioneering web browser: empowered the masses. For data discovery, Tableau makes it simple to connect to some data, slice and dice it, and create some cool visualizations. It more than satisfies a simple equation for a software product:

Love = Results – Effort

That is, if the results for your users are way larger than the effort they put in, you have a winning solution, and Tableau kills it. Tableau's timing was perfect: end-user empowerment and the proliferation of data arrived just as traditional command-and-control analytics was reaching a user-frustration tipping point. Tableau provides an incredible level of interactivity to "play" with the data, without requiring IT.

And there is one other timing aspect that Tableau has continued to capitalize on: a sustained vacuum of analytics vision from Microsoft, which had been asleep at the wheel. For a long time, Pivot Tables and Microsoft Analysis Services were the last great analytics innovations from Microsoft, and those introductions disrupted vendors (I worked at a vendor on the receiving end, and it sucked). But since then, it has been a nuclear winter. That absence enabled Tableau to spawn a new industry, empowering users to explore data, and to thrive.

The Browser Wars of the Mid 90s

Similarly, when Netscape first appeared with the growth of the Internet, Microsoft was essentially asleep at the wheel too. At its peak, Netscape had an 80%+ share of the browser market. Fearful that Microsoft was late to the Internet, Bill Gates led the call to arms with his "Internet Tidal Wave" memo, and one of its areas of focus was Netscape. The strategy was to put Microsoft's full weight behind ending Netscape's dominance with (love it or hate it) Internet Explorer. Netscape quickly lost share as IE simply became the default, dropping to less than 1% share by 2006.

Netscape's Share of the Browser Market from the '90s to the '00s

Gates's Internet is Nadella's cloud and data. One of the cornerstones of Microsoft's strategy is not just the cloud with Azure (now second only to AWS), empowering developers to create cloud services, but also tools and services that empower users to work with data.

The announcements around analytics have come thick and fast: PowerBI, PowerBI Desktop, PowerBI Mobile, PowerQuery, Azure Stream Analytics, Azure HDInsight, Azure Machine Learning, and Cortana Analytics. For the PowerBI suite, the price is right: PowerBI is free, and PowerBI Pro is $9.99 per user per month, which gets you more data, more refreshes, on-premises connectivity, and more collaboration features.

The Coming Data Discovery War

So I tried out the web flavor of PowerBI a few months ago, bringing in some data from Salesforce into a prepackaged web dashboard, and it was cool, but to be honest the results were too limited – you couldn’t really play with the data enough. Definitely a threat to some cloud dashboard providers, but no threat to Tableau for real empowered data discovery. It’s more for consumption of analytics, but not playing with data. It fits into a data discovery framework, but isn’t the whole solution.

Fast forward to last week, where I tried out PowerBI Desktop. PowerBI Desktop is basically the equivalent of Tableau Desktop. And the interplay is similar, where users create rich analytics with the client, and then publish to the web to share the results.

But what blew me away was how PowerBI Desktop stacks up...

Let’s start with the data sources. They’ve done a great job of adding a huge number of sources – the usual suspects like Excel, text files and database sources, but also supporting a wide range of big data sources, social sources, ERP and CRM sources etc. It looks like they’re working with ISVs to add sources at a frightening rate. Getting access to data is often one of the big stumbling blocks for data discovery (and I think one of Tableau’s weaker areas) – and it looks like Microsoft is really focused on cracking the code here.

So then I thought I'd get my hands dirty and give it a little test drive with my favorite old-time schema, Northwind (which I was pleased to see Microsoft still use for on-stage demos!). It's a relational schema, and PowerBI Desktop did the automapping for me, then enabled me to easily make some changes to the joins. Nice and straightforward, very usable, and easy to visualize the relationships.

Finally, for the really fun bit, some data discovery. And this is where it was shockingly good. From soup to nuts, from data to dashboards, I built the quick example below in about 20 minutes. And it checks all the boxes. On the right is an easy field selector, and there's a rich array of visualizations: traditional charts, heatmaps, gauges, geospatial charts (more visualizations can be added by third parties), and so on. All of the visualizations have strong data flexibility, so I could easily change the data that I'm seeing in the chart, filter it, use TopN/BottomN, etc. I found myself easily slicing around the data, trying out different views, just like Tableau.

Some of the cooler stuff is how the dashboard components automatically snap together, with no effort at all. For example, when I click on a region on the map, my other charts automatically reorient, and it's easy to create a book of dashboards, calculated measures, and so on.

Oh, and publishing is simple too.

So, is Tableau the New Netscape?

Which brings me back to the comparison at the start of all this. PowerBI Desktop does what 90% of people need to do with discovery tools, it's free, and it's nicely integrated with Office. So why use Tableau then? Sure, Tableau is still better in some areas: more visualizations, automatic chart selection, Mac support, and I'd say it still has a slight edge in intuitiveness for data discovery. But here's the kicker: Tableau is 10+ years old, PowerBI is 1.0, and it's tying into Microsoft's broader strategy around Azure, Office365, and Cortana. Brutal.

I'm sure there's chatter about PowerBI going on in the halls of Tableau. The threat from it may well mean considering additional options around predictive analytics, or moving towards an applications strategy beyond tools.

Of course, if I were to take the Netscape analogy to its ultimate ending, out of the ashes of Netscape rose Firefox – which came to haunt Microsoft. I’m not sure this story will end in the same way.

Data Discovery: Warning Batteries Not Included

There were few things worse than the Christmas disappointment of frantically tearing open a present to find out it was dead in the water: no batteries. Worse still, back when I was a kid, there weren't any stores open on the day. So in the absence of some forward planning and on-hand batteries (usually unlikely), it meant a grindingly slow wait until the following day to get some satisfaction. From anticipation to disappointment in a few short seconds. These days toy manufacturers are smarter; they'll just include them, thankfully.

Sometimes software can be prone to the same issue, and most recently data discovery tools in particular. Data discovery has been one of the fastest growing segments within analytics, growing substantially faster than its traditional Business Intelligence counterparts. And with good reason: data discovery adoption typically starts as a bottom-up, business-user-driven initiative. It begins with a frustrated and enterprising analyst looking to explore or share some insight, caught between spreadsheets and the absence of a useful (or existent) analytics initiative, one that is usually too costly, too rigid, or just sitting on the shelf. Data discovery just makes sense for getting to success quickly.

The great thing about data discovery tools is that they provide near-instant satisfaction, from quick and easy setup through to data visualization and exploration capabilities, from easy ad-hoc analysis to cool geospatial visualizations and heat maps. With tools like Tableau you can get eye-catching results incredibly quickly against spreadsheets, a database, or a cloud source like Salesforce. A business user can typically go from data to dashboard significantly faster than with traditional Business Intelligence tools, because those tools require complex mappings, semantic layers, and IT setup before delivering any joy.

In contrast to traditional BI tools, they eschew centralized data integration, metrics layers, and IT-maintained business mapping layers. That's the unglamorous stuff that, once it's all done (which takes a lot of time!), is often too rigid to accommodate new ad-hoc data requirements, or misses the mark in helping answer the questions analysts actually have when the need arises. The simple fact is that it is difficult to design an analytics initiative a priori, because you don't necessarily know all the questions analysts will ask. It's why data discovery has been so successful and adopted so quickly.

What About Those Batteries?

It's true: setting up all of that data integration, and the semantic layers for users to interact with, slows traditional BI deployments down. And having to prepare data or optimize database schemas to get decent query performance is just plain thankless. Analysts just want to answer the questions they have, right now, and all of that plumbing gets in the way of speed and autonomy.

So data discovery tools typically dispense with all that, but in doing so, they throw the baby out with the bath water, and there are consequences. Their value proposition is simply to point the tool at a spreadsheet, a text file, a simple data source, or perhaps a cloud source like Salesforce, and start analyzing. The problem is that life in the long run is rarely that simple, and that nice shiny demo of the product often hides the real data integration complexity it takes to get to that place. Often even spreadsheets and text files need cleansing, and opportunities or accounts in Salesforce need de-duping. Never mind joining together accounts across CRM or ERP systems, or resolving complex joins across multiple tables (or databases). In emphasizing speed and autonomy, what's lost is reuse, repeatability, and sharing clean data.
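To make that hidden prep work concrete, here's a minimal sketch, with entirely hypothetical column names and values, of the kind of cleansing, de-duping, and joining an analyst quietly ends up doing before the "point and analyze" part can even start:

```python
import pandas as pd

# Hypothetical extracts: a Salesforce account dump and an ERP customer list.
# Column names and values are made up for illustration.
sfdc = pd.DataFrame({
    "AccountName": ["Acme Corp.", "ACME CORP", "Globex, Inc."],
    "Owner":       ["jsmith",     "jsmith",    "mlee"],
})
erp = pd.DataFrame({
    "CustomerName": ["Acme Corp", "Globex Inc"],
    "OpenAR":       [12_500, 4_200],
})

def normalize(name: str) -> str:
    """Crude name cleanup so 'Acme Corp.' and 'ACME CORP' can match."""
    name = str(name).strip().lower()
    for junk in (".", ",", " inc", " corp"):
        name = name.replace(junk, "")
    return name

sfdc["match_key"] = sfdc["AccountName"].map(normalize)
erp["match_key"] = erp["CustomerName"].map(normalize)

# De-dupe the CRM side, then join the CRM and ERP views of the same customer.
sfdc = sfdc.drop_duplicates(subset="match_key")
combined = sfdc.merge(erp, on="match_key", how="left")
print(combined)
```

Nothing here is hard, but it is exactly the repeatable, shareable work that gets lost when every analyst does it alone in a spreadsheet.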

It’s Like Making a Battery Run to the Store. Daily.

What often happens, especially when data discovery tools get virally deployed across departments, is that IT, or the administrator of the data source in question (e.g. the Salesforce or ERP admin), gets left carrying the bag. That means repeated requests for ad-hoc data extracts, or the analyst repeatedly grabbing an updated extract and then trying to join it with other sources and cleanse it in spreadsheet hell. Over, and over again.

The organization turns into a culture of one-offs: a one-off extract for a few periods of data for some win-loss analysis, another extract for some product discounting analysis. Analysts may end up performing weekly or monthly data prep and cleansing just for their own activities, with no shared benefit for the rest of the organization. The business ends up with multiple data silos and a lot of redundant effort. Multiple versions of the truth get created, with every data discoverer using his or her own logic to cleanse, transform, and visualize the data.

Everyone ends up with cool visualizations to share (and impress the management team with!), but the organizational cost is high, with wasted time and redundant sets of conflicting data.

But things can be different with a little planning ahead.

Three Steps to Building a Batteries-Included Approach to Data Discovery

1)     Create a sustainable Data Discovery strategy

I'm not advocating building old-school centralized BI (though it does have a role as part of a broader analytics strategy, more on that later), because data discovery tools fill a need to understand and explore data quickly. But organizations need to create a strategy around data, and encourage sharing of not just dashboards but data too, to optimize for more reuse. Then, when the organization hits an inflection point in data discovery adoption, there is readiness to roll out user-driven data prep tools like Paxata and Alteryx. These tools provide relief by enabling business users not only to prepare their own data and automate common preparation activities, but to share the results with others too. The outcome is shared pools of data that have been refined to handle common business questions. Better yet, compared to traditional data warehouse initiatives, when data is prepared from the bottom up and shared, you'll often end up with much more pragmatic and useful data for real-world business questions, based on a more democratic (and continually improving) process for refining the data pool.

2)     Identify data sources that need to be frequently analyzed and optimize for re-use.

One of the other keys is to identify which data requests have slid into inefficiency and dysfunction. For example, run a quick poll amongst apps administrators, such as asking the Sales Ops Salesforce or Dynamics GP admins which data pulls for business users have become onerous. Perhaps there is a month-end extract from multiple ERPs that needs merging every month and is sucking up cycles in finance or ops. It's also worth polling analysts to understand what kinds of recurring transformation and merging they're performing, and which ones are duplicated across team members. The answers to these questions reveal which data tasks are candidates to be consolidated across teams, or are opportunities for automation.

3)     Think Holistically about Analytics, Create a Journey

As we've seen, while laissez-faire adoption of discovery tools can create results quickly, it's often not sustainable as adoption scales up. The truth is that there typically needs to be some ownership and data stewardship. In mid-size organizations that may mean an analytics strategy led by finance, perhaps consisting of analytics embedded in the transactional apps, some centralized BI/reporting (for hardened shared metrics and reports), collaborative data pools, and data discovery tools. In larger organizations, it's a prime area for IT to lay the foundation to support a sustainable bottom-up data discovery strategy.

So before you go out shopping for that shiny new data discovery tool for the holidays and think about rolling it out across your organization, consider stocking up on batteries first, so your team will spend more time playing with visualizations and less time stepping over each other around data.

 

Using Benchmarking Analytics to Improve Accounting Productivity and Employee Engagement

Benchmarking, or comparative analysis, has been around a long time. It’s always been one of those promises made by software vendors with analytics, but typically unfulfilled in reality, in terms of real adoption or genuine usefulness.

Often it's the sizzle part of a vendor's dashboard demonstration, where they can wow the audience by showing how a company's financial performance compares against industry averages, importing data from a third-party data provider and comparing key financial and management performance measures such as revenue growth, profitability, and revenue per head.

While interesting, it turns out not to be that useful to many, hence the rather tepid adoption within analytics deployments. If your company is less profitable than your industry peers, you probably knew it already, and actually finding the root cause of the issue is often a separate project entirely, and where the real work is. The insights are often too far removed from where the real action is: in the departments where people and process are at work.

Benchmarking to Improve Actual Business Processes

So it was with interest that I saw a demo of the newly launched BlackLine Insights at the company's InTheBlack conference in Atlanta this week. BlackLine is a cloud provider of solutions that automate and streamline the close process for accounting organizations, enabling them to automate millions of bank reconciliations, quickly resolve intercompany reconciliations, and take the overall manual effort out of the close process.

But this is where it gets really interesting: with 1,200 customers and 120,000 users, BlackLine has a huge amount of data about the productivity and processes of the accounting organizations it serves. The kind of data we're talking about here is process measurements like on-time completion rate, average completed assignments, or average rejection rate. With benchmarking, BlackLine customers can see how their own accounting function stacks up against the broader community, by metric, by industry, and by organization size.
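As a rough illustration of what these metrics look like under the hood (the task names, fields, and benchmark figure below are hypothetical, not BlackLine's actual data model), an on-time completion rate is simply on-time items over total items, compared against a peer-group figure:

```python
from dataclasses import dataclass

@dataclass
class CloseTask:
    name: str
    completed: bool
    days_late: int  # 0 means completed on or before its deadline

# Hypothetical tasks for one accounting team's close period.
tasks = [
    CloseTask("Bank rec - operating account", True, 0),
    CloseTask("Intercompany rec - EMEA", True, 2),
    CloseTask("Accruals review", True, 0),
    CloseTask("Fixed asset roll-forward", False, 5),
]

on_time = sum(1 for t in tasks if t.completed and t.days_late == 0)
on_time_rate = on_time / len(tasks)

industry_benchmark = 0.85  # hypothetical peer-group on-time rate

print(f"On-time completion rate: {on_time_rate:.0%} (peer benchmark: {industry_benchmark:.0%})")
```

The value isn't in the arithmetic, it's in having the same measurements captured consistently across thousands of users so the comparison actually means something.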

Creating a Level Playing Field Between Employees and Managers

The opportunity is to enable continuously improving efficiency through continual measurement. But the really good news is that it cuts both ways, because it also creates a level playing field in the accounting organization between employees and managers.

The reason is that, in addition to helping management identify opportunities to improve the close process by spotting areas of underperformance or lower-than-average productivity, it can also be used to ensure management doesn't hold unreasonable expectations about what the team can realistically crunch through during the close, by measuring against what's actually achievable in the industry. It's real data that accounting staff can use to establish common ground for productivity expectations, and it equips all parties to set goals that everyone buys into.

For example, perhaps the team is burning the midnight oil to get reconciliations done, but management is setting higher goals. With benchmarking, they can look up the norms in their segment and share them with management to justify hiring, or to operate more effectively as an organization: real employee empowerment. And management can set goals for accounting productivity not just on gut feel, but by comparing with other high-performing companies: realistic goals that employees know have been established with rigor and fairness, so everyone buys in. Data drives decisions, in both directions. That's a little more democratic.

Business process benchmarking opens up a whole opportunity for measurement, from comparing the speed of close, to industry error rates, responsiveness, or speed of resolution. It even offers future opportunities around gamification, perhaps with badges and awards for achieving business process excellence, such as being in the top percentile of performance in the industry. There's even potential for translating measurable business process excellence into LinkedIn profile fodder!

Down the line, linking accounting efficiency benchmarks with business performance measurements can finally connect company performance to accounting process performance, providing a narrative to shift the accounting organization from cost center to value center.

The Cloud as Benchmarking Enabler

The cloud makes this possible for BlackLine: because everyone is running on the same codebase and the same platform, metrics can quickly be aggregated across customer usage data. It takes all the hard work out of collecting, comparing, and using the data, for both BlackLine and its customers.

Interestingly, this kind of benchmarking is incredibly hard to do using tools designed for an on-premise world (or fake cloud solutions), because it requires aggregating usage and application-level metrics across customers: centralization and a common code-base and schema are key. You also need scale, in terms of the number of customers across industries, to make the data and the insights useful.

It's also a big contrast to the old method of business process benchmarking: infrequent surveys from professional associations and analysts, where the measures often aren't granular, typically aren't broken down by industry, and you still have to reconcile the data (pun partially intended) between your own internal business process measures and the survey provider's. Here, BlackLine has the opportunity to become a benchmark data provider in its own right, and even to provide narrative on trends in accounting organizations based on the data.

But one of the most interesting implications of solutions like BlackLine Insights is fostering a sense of community amongst users. With everyone in the BlackLine community running the same solution, accounting teams across organizations can, for the first time, compare stats and share tips on how they moved the dial to improve them. Everyone is sharing performance metrics on the same playing field, using the same platform they can actually use to improve them.

The cloud has offered up the opportunity for better benchmarking for some time, and the intersection with business process and community offers compelling value. It'll be interesting to hear stories of benchmarking in action at InTheBlack 2016.

Battling the Mega Vendor: When TOTO isn't Your Best Friend.


Unfortunately, we won't be talking about cute and fluffy Toto here, whom L. Frank Baum, the author of The Wonderful Wizard of Oz, described as a "little black dog with long silky hair and small black eyes that twinkled merrily on either side of his funny, wee nose."

This dog, though, has an attitude. He's called TOTO, and he stands for Turn Off The Oxygen. I prefer it over the equally unpleasant term I used to use, Poison the Well. Turns out I'm a sucker for a pronounceable acronym, especially one that initially summons up the image of a cute dog and turns out to be just the opposite.

TOTO is a strategy used by vendors to disrupt their competition. It's a strategy I've been on the receiving end of many times, and it'll keep you up at night. And unless you're proactive and just as aggressive in counteracting it, it can turn a hot market that you pioneered into one that's free of oxygen, and of revenue.

Simply, it's this: when an upstart is 100% focused on disrupting a market with a bold new offering that poses a threat to a larger, more well-heeled mega-vendor, the mega-vendor responds by offering what is typically an inferior product at zero or very low cost. They make it a no-brainer from a buyer perspective. And because 100% of the upstart's revenue stream depends on this product, while the mega-vendor has many other sources of revenue to insulate itself, the upstart has a deer-in-the-headlights moment, employees flee the ship, and the company gets acquired or disappears into obscurity. They got TOTO'ed.

The classic example of a somewhat anti-competitive, and a little too overt, TOTO happened to Netscape in the nineties. With Microsoft simply including Internet Explorer free with Windows, Netscape's business model ceased to make sense. Similar TOTO strategies are in play today, with the likes of Box vs. OneDrive and Tableau vs. PowerQuery/Power BI, and in many other segments.

When it Really Starts to Bite

So what’s it like to be on the receiving end of this kind of strategy for B2B vendors? And what sales and marketing techniques can you use to counteract? Let’s start with what happens when the strategy is in play:

  • Deal velocity slows. The mega-vendor has entered the market with a solution that on paper is similar to your own at a fraction of the price. Your decision maker/champion/buyer begins evaluating the alternative, or IT becomes involved in the deal as the mega-vendor looks to roll the solution into a broader agreement. It becomes hard for your champion to justify to his or her peers why your solution, and not the mega-vendor's. Yes, your features are "cool," but are they perceived as really worth what you're charging versus free or dirt-cheap?
  • Price points drop dramatically. Whether you win the deal or not, the mega-vendor has readjusted the market price point – in the wrong direction. All vendors in the space react, reducing their price points to compete. A new, unpleasant norm is set for the street price for your class of solution. The oxygen begins to dissipate for everyone.

  • Sales frustration and desperation kick in. Suddenly, sales tools and materials that were adequate don’t cut it anymore. They worked great when the value proposition was so obvious to communicate and for the buyer to understand. Sales cycles that were quick wins turn into losses and no-decisions. And finger pointing begins between sales, marketing and product organizations on what to do. Everyone scrambles for air.

You get it, having the oxygen turned off really does suck. The solution is a combination of sales, marketing, and product strategy. I can't solve product strategy in this post – that’s a longer discussion, and definitely part of the solution. But there are things you can do, right now.

5 Strategies That are Preferable to a Dog Whistle

So how can you combat TOTO, especially when you are up against a mega-vendor (which is the typical scenario)? And how can you do it fast? Because let's face it: when this strategy is in play, time is a luxury you don't have:

  • Be easy to do business with. Your mega-vendor is likely still as big and slow-moving on the sales side of things as they are on the product side. Your sales cycle is where you can showcase how easy it is for the prospect to do business with you, see the product, get hands on, and get their questions answered. It's where the prospect gets to realize the difference between being a customer of yours versus a customer of the mega-vendor, and there's real value to that.

  • Equip your decision maker. If your decision maker isn't equipped with the knowledge to communicate the value of your product to their peers, and why it is a premium product and better than free, then you've got a problem. Will your product have materially lower training costs, and if so, by what percent? Do they have a much stronger chance of success with you, and if so, how? Is the ongoing maintenance with your solution substantially less, and why? Your job is to equip your champion to fight against free, to communicate why the value (and ergo the cost) is higher, and why it will make them (and the company) more successful than the alternative.

  • Get the prospect hands on. Often the mega-vendor wants to wrap the whole product in a big master sales agreement and route it through IT, not even going through a formal evaluation or decision process, blindsiding the business decision makers. It's how you get an inferior product over the finish line. You need to make your business champion, and their peers, fanatical about your solution: get them hands on, get them trained and vested in your solution. If it's better, they have to feel its superiority, and just doing it with PowerPoint slides won't cut it. Make your prospect confident that your solution means less risk. And then communicate the future unknown costs of the mega-vendor's solution versus the known costs of yours.

  • Change up your sales tools. When you're up against TOTO, the old sales tools you used to use won't necessarily be effective anymore. You have to re-evaluate everything: are your customer success stories speaking to your value over free? Do you have a real ROI comparison versus the alternative? Is your sales team ready to change up the sales cycle, getting the prospect hands on to prove out the value? Are you putting updated messages in the hands of your buyers?

  • Really focus on why you're different and turn it into value. It's not just your mega-vendor competitor's price point that is low; your other competitors' price points will drop too as they react to the new disruptive entrant. It's why you need to communicate not just your differentiation, but what it means in terms of current and future value, more clearly than before.

Of course, there are other strategies you can use, like going negative in the face of the strategy - but to be honest, positively communicating your value, and equipping your decision maker is a better road. Negativity is not a sustainable strategy.

Believe it or not, there are some benefits to TOTO, because the fit of desperation from the mega-vendor typically means large globs of accompanying press and marketing spend, which often means more opportunity and a bigger overall market and space.

But it’s your job to keep the oxygen turned on, and take advantage of the bigger market to play in.

Why Planning and Analytics are like PB&J

So this post is for the planning and analytics geeks out there. I enjoy watching software application categories undergo fundamental change, where real innovation starts to appear. And the nexus of planning and analytics is where this is happening.

Often the drivers behind these kinds of changes are technology-based, such as the rise of mobile, or perhaps social or economy-based, such as the rise of the self-employed economy. But when these external drivers gather momentum they often disrupt software categories. Some people go with the flow; others try to fight it.

With that in mind, I recently read with interest an article that made the case that Analytics (or, for the old school among us, Business Intelligence) and Budgeting/Planning applications are two separate worlds that don't really need to be unified in a single application, and that putting them together is just hype, not useful.

It was on the heels of a set of announcements from SAP, with SAP Cloud for Planning bringing together both analytics and planning – large scale analytics, data visualization, modeling and planning under the same unified hood, underpinned by SAP HANA. I personally think there is real innovation to be had at this nexus of analytics and planning, but more on that a little later.

The crux of the article's case was that Analytics is for the tech guys who get big data, data prep, data warehousing, SQL, and unstructured and structured data, while Planning is for Finance, who worry about drivers, financial allocations, forecasts, and the like.

Different Disciplines, Different People?

I get it; my background hails from the data warehousing, Business Intelligence, and Online Analytical Processing (OLAP) world. And to be honest, financial planning was a different world. When I built dashboards and analytics for organizations (typically for the crew in IT), there was often a separate planning implementation going on in the room next door for Finance. Each side looked with some disdain over at the other (I preferred writing SQL to thinking about cost allocations).

When Business Intelligence first emerged in the mid-90s it was built by tech, for IT: we're talking star schemas, semantic layers, and all that good stuff, distant from the world of finance. In contrast, when the first packaged planning apps for finance appeared, they were built as apps. New technology at the time, like OLAP databases, was optimized for modeling and what-if analysis for finance, but had fewer dimensions, less detail, and weak ad hoc analysis, while big-iron data warehouses were optimized for large-scale analysis but couldn't handle the changes in assumptions inherent in the modeling and planning process.

So when the rounds of vendor consolidation involving Business Objects, Cognos, and Hyperion happened in the mid-00s, the result was two (often more) stacks: Business Intelligence (BI) and Corporate Performance Management (CPM). Two different categories, different skill sets, different code-bases. Vendors glued these stacks together with a veneer of branding and single sign-on to make them look like a suite, but beneath the thin integration they remained different code-bases and experiences.

Change is Underway.

Just as NetSuite and Workday are reimagining their respective categories of ERP and HCM for the new economy, the same shift is beginning to gather pace in CPM. In ERP, for example, eCommerce capabilities increasingly need to work seamlessly with the ERP, from web storefront to order, because a digital storefront is often strategic. And HCM apps need to be mobile-first in an increasingly self-service world. CPM is undergoing a similar transformation, just differently.

CPM is changing because planning itself has to be more responsive, more in tune than ever before with the operating environment. And that requires analytics.

A recent Hackett Group survey showed that about a third of companies intend to implement rolling forecasting over the next few years. Combined, Hackett saw over half of companies building some kind of rolling forecasting process, and attributed it to increased competitive pressures on companies and faster-moving markets. Companies don't just want to see further out; they want to see their forecast adjusted based on a continually changing environment.

So doing a yearly plan/budget isn’t good enough anymore either. And because organizations are increasingly moving to rolling forecasts, it means ingesting ERP, HCM, and CRM data increasingly frequently. And more frequent planning and the push for more accurate forecasting means responding to external data too. Not all of this data needs to be in the plan itself, but the planning professional must be able to update planning drivers, change assumptions, and make course corrections in the face of the larger data landscape that they are expected to respond to - and they need to see that environment clearly.

The data landscape they're making decisions on is larger than before, and they’re being asked to re-plan and respond to that landscape faster. Planning no longer takes place in a vacuum, and it takes place more frequently, and closer to the business.

The dashboard vendors don't have it easy either, because standalone dashboards aren't really good enough anymore: they don't have a call to action in them. Just seeing a chart isn't enough; the expectation is that you'll do something about it. You either take action in your system of record (that's why providers like NetSuite, Workday, and Salesforce provide embedded analytics), or you plan and adjust based on those insights, using engines that combine analytics and planning, like SAP Cloud for Planning, Anaplan, and Adaptive Insights. But a standalone, run-of-the-mill web-based dashboard environment (or a standalone planning environment) is deteriorating in value.

But really reimagining planning and analytics as a single unified solution means starting with a clean sheet of paper. Providers like SAP are taking the lead. Remember those data stores I mentioned earlier, one optimized for planning and the other optimized for large scale analysis? Well in-memory columnar databases like SAP HANA offer the opportunity to do both in the same database and data model, which makes it easier to model and plan in the context of large scale analytics. With data visualization operating on the same data store that's being used for analysis and planning, it's a potentially potent combination, blurring the lines between analysis and modeling.
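As a toy illustration of that blurring (pandas standing in for an in-memory columnar store, with made-up figures, and obviously nothing like HANA's actual engine), the point is that the analytical slicing and the planning what-if operate on one and the same table:

```python
import pandas as pd

# One in-memory table holding actuals alongside planning drivers (hypothetical data).
data = pd.DataFrame({
    "region":      ["East", "East", "West", "West"],
    "product":     ["Pro",  "Basic", "Pro", "Basic"],
    "actual_rev":  [120_000, 45_000, 98_000, 52_000],
    "plan_growth": [0.10, 0.05, 0.12, 0.05],
})

# Analytics: slice the actuals however you like.
print(data.groupby("region")["actual_rev"].sum())

# Planning: change an assumption, and the forecast recomputes off the same data.
data.loc[data["region"] == "West", "plan_growth"] = 0.15  # what-if: push West harder
data["forecast_rev"] = data["actual_rev"] * (1 + data["plan_growth"])
print(data[["region", "product", "forecast_rev"]])
```

One model, one set of numbers, and the line between analyzing and planning starts to disappear.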

So to do this right, it really helps to have a unified system: one database engine and model, with the same engine serving both the analytics and the planning, one set of common definitions, one unified user experience, and one business dictionary across both. It's no longer about gluing these systems together, like what happened over a decade ago; they have to be rethought in the context of where planning and analytics are headed, and designed together.

For once, this isn’t just vendor hype. As the nature of planning changes, a new opportunity opens up to rethink the systems that enable it.

Now time for that PB&J.

 

 

 

The Fake Cloud Comes to Budgeting and Planning Applications

Legacy on-premises providers are feeling the heat, as more and more businesses worldwide continue to migrate to the cloud for added agility, greater collaboration, and faster data analysis.

This cloud momentum has left many legacy players playing catch-up. They're frantically migrating their products to the "cloud," but it's really just the "hosting" of old. The truth is that they're gluing old products onto a delivery model they were never designed for. Worse still, they're marketing it as if it were a real cloud solution. The "cloud-washing" phenomenon has now come to the budgeting, planning, consolidation, and business intelligence space, where legacy providers are warming up two-decades-old software, painting puffy cloud pictures in brochures and presentations, and hoping their prospective customers can't spot the difference. The truth is, you can't just move on-premise software to a datacenter and call it "cloud." Ultimately, the customer is the loser in this scenario.

Even the media is fed up with fake cloud providers that try to pass as SaaS vendors.

Why? Because there are real, meaningful differences between solutions born and bred in the cloud, and those that were forced into the cloud to try to keep up with today’s business needs. Customers who are unable to navigate through the sea of SaaS-queraders and who are fooled by the fakers are destined to be stuck with expensive, antiquated solutions to run their businesses.

So the question is this:

Can You Spot Fake Cloud Budgeting and Planning Applications?

Here are four warning signs to look out for:

Fake cloud budgeting and planning solutions are much more difficult to use.

For a budgeting and planning solution to be successful, finance needs to be able to make changes independently. That means creating new plans, allocations, or dashboards without IT or a busload of consultants. Fake cloud solutions still carry their complex heritage. Running them in the vendor's data center still means a complex and IT-intensive user experience for you in these areas:

  • Building financial plans
  • Updating security settings
  • Creating reports
  • Writing allocations and formulas
  • Making changes to business structures
  • Tuning the application for performance

One easy way to sniff out a fake cloud? Look for multiple administration consoles, non-browser based tools to administer the app, and large amounts of IT facing/technical administration functionality. Even better – ask to take a free trial, and watch the SaaS-Queraders scratch their heads, wondering how it’s even possible given their solution’s complexity.

In contrast, a true cloud solution is designed from the ground up for business users to manage and change the application themselves, because it had to be. If you can take a free trial and be using the application with your own data within just a few minutes, you can be pretty confident it's a real cloud solution.

A conversation with a reference customer starts with “What version are you running?”

With fake cloud solutions, all on-premise/hosted customers are on different versions. It’s much harder to share knowledge and best practices when your peer is running a different version of the software – one that might be 5 years old. In fact, when fake cloud providers “release” new software, their on-premise customers wait years to upgrade and each hosted instance requires upgrading separately - often an onerous and risky process. With cloud-native solutions, 100% of customers are always on the latest release. Everyone is speaking the same language, creating a strong community for sharing tips, tricks, and adopting the latest functionality.

Fake cloud solutions are often an “operations horror” behind the scenes.

With fake cloud solutions, you don't want to see what's going on behind the curtain; it's often ugly. All of that IT "ops" complexity is kept away from you. Good for you, but bad for the vendor, because those old on-premise solutions were never designed to run as a service, or to be easy on IT. Often, fake cloud solutions need one or more instances for each customer, and each instance requires its own care and feeding, patching, fixing, maintenance, and upgrading. The fake cloud vendor quickly ends up with hundreds, or even thousands, of instances. It's incredibly easy for customizations and optimizations to break during an upgrade, because the applications weren't designed with easy upgrades in mind.

In contrast, true cloud solutions are multi-tenant with a single code-base that’s designed to automatically migrate customizations with each new release.

A slower pace of innovation.

The best cloud companies innovate faster than fake cloud providers. Why? Because they can focus on one codebase and one platform. Imagine a world where your development team has to maintain 4 or 5 different versions of the software to support all the different customers running those versions. And imagine if all those versions were on different operating systems and databases: Windows, Linux, Solaris, Oracle, DB2, SQL Server. It's a matrix of complexity, which saps innovation and resources.

Contrast that with a real cloud vendor. All of the customers are on the same version and platform. It means 100% of the R&D team focuses on improving the application that YOU are running, not someone else's code-base.

Use these four tips as a starting point to avoid getting burned by the fake cloud, and do your own research; they're merely the tip of the iceberg as far as warning signs go.

Fake Cloud Vendors are on a Catastrophic Course — and their Customers are Riding Shotgun

In outlining the 4 warning signs of fake cloud solutions in my last blog, I left out one key point: The catastrophic course that the fake cloud plots for both the vendors, and their customers.

Over the last 10 years, I've seen vendor after vendor take the eerily similar, ill-advised, three-step path to fake cloud catastrophe, and their customers are ultimately the losers. Once they embark on the path to the fake cloud, these vendors are on a direct course to failure.

So what is the path that every fake cloud vendor (and their customers) almost invariably follows?

Step 1: “Our customers and partners are begging us for a cloud solution!? No biggie, we’ll host it.”

Seeing their traditional businesses in precipitous decline while true cloud providers grow in customer base, revenue, and market share, legacy vendors spin up an ops team to host the solution. The ops team, typically specialists in hosted software operations, says, "No problem!" Most on-premise vendors are far into this step already.

The company starts spinning up hosted software instances. Marketing knocks out a subscription-based price list and creates the requisite collateral littered with calming images of fluffy clouds. All done, right? Wrong.

After a few initial sales, the ops team realizes that hosting is much more than a question of hosting infrastructure.

  • "Customers are complaining about performance. How do we tune the actual application anyway? Adding more memory to the server didn't help."
  • “We have to individually upgrade hundreds of customer instances? What are the application level considerations for each customer?”
  • “How do we manage SLAs across our 100 instances, and how are we going to do it when we reach 1,000 and beyond? We need to hire 100 more people to keep these plates spinning!”
  • “Customers are asking us to make changes because it’s too difficult to do it themselves?! This wasn’t meant to be a managed services agreement!”
  • “Engineering, you have to help us. We can’t successfully scale this system AND feed each customer instance – do something!”
  • “Hey engineering, those patches and fixes you delivered to customers without worrying about applying them – guess what? We have to do it now – and it sucks.”

The business guys look across at the real-cloud providers and say, “We need to put the pedal to the metal on this cloud thing!” However, operations has already raised the red flag. Costs are outrageously high, customer satisfaction is shockingly low. Something has to change. So…

Step 2: “No problem, we can make our old solution more ‘cloudy’.”

This step is almost inevitable once a vendor has sold the fake cloud. Some bright spark develops a multi-year roadmap to the cloud, which includes a few "optimizations" to make the existing product run more cost-effectively and more reliably, and be more user-friendly, as a service. Salvation!

  • “We’ll make it multi-tenant, so it’s easier for ops to deliver.”
  • “We’ll provide tools to help ops run it more effectively.”
  • “We’ll make the app more intuitive for business users to take the pressure off of our own team.”

It's just not that easy. No vendor has successfully turned an on-premise application into a real, multi-tenant, self-service cloud app and scaled up to thousands of customers. Old applications are innately complex because they're running on several code bases: Java, C++, client-server, N-tier, each of which is decades old and was never meant to run in a multi-tenant environment.

This is the point at which the vendor starts their great "cloud experiment": trying to service the airplane while it's in flight and their customers are on board.

Can you really blame the engineering team, a group of on-premise software developers trying to build a cloud solution for the first time? Turning their solution into a self-service cloud application requires multiple UIs just to hide the complexity, but those UIs create more complexity of their own.

Throughout this process, innovation comes to a grinding halt. Adding features that customers really want takes a backseat to getting the Frankenstein-like “cloud” solution off the ground. Overall complexity increases, while customer satisfaction decreases around a solution littered with patches, fixes, and bolt-ons.

Finally, the number of customer instances the vendor is supporting rises to unmanageable levels. Step 2 isn't working out. Customers are asking for free trials like the real cloud vendors offer. They're looking for real features that solve their problems, not endless cloud fixes and kludges. So it's on to step 3.

Step 3: “Hmm, this didn’t pan out. We need to build a new solution from the ground up.”

Just when you thought it couldn’t get any uglier, the vendor reaches the inevitable dead end 2-3 years after first launching the fake cloud solution. Business leaders are panicking, realizing they’ve performed a failed surgery on their antiquated application. Their product is a monster of complexity, and it’s practically impossible to support on-premise and hosted customers while delivering a reliable, cost-effective, self-service solution. Customer satisfaction continues declining, and it has become clear that the future is 100% cloud.

It’s time for Project Re-do: Build a brand new, cloud-based solution.

This is going to take at least another 2-3 years to complete. Additionally, the vendor can’t simply retire the old solution because there are hundreds of customers using it and still paying maintenance fees. Those customers won’t see any new features while the vendor dedicates all resources to creating a new cloud-based solution, at which point they’ll have to re-implement that new cloud app.

No innovation + re-implementation + a fledgling cloud app = No fun for years to come.

Then the pain for the vendor really begins. They have two solutions: a highly functional and mature old app that runs great on-premise and badly in the cloud, and a brand-new app that runs better in the cloud but with far less functionality.

In the meantime, true cloud vendors, using one code-base and years of SaaS experience, continue to build a worldwide customer base. They've steadily innovated and improved their solutions, adding new features that meet customers' needs. By the time legacy vendors are ready to launch a cloud app, it's too late to get in the game. And that's why, once legacy vendors embark on the fake cloud, they are destined to crash. There's only one question left to answer: will you join them on that ride?

An Insider’s Guide to Buying Enterprise Software: Seven Tips to Set Yourself Up for Success


After nearly 20 years in the software industry, having sold software, demoed it, implemented it, developed it, marketed it, and bought quite a bit of it too, I've always been surprised how many buyers don't follow some basic ground rules when making such a big decision. Asking some basic questions and doing some key research yourself can make all the difference between success and failure.

Let's face it, it's easy to build a list of requirements and then glide along the sales process: sit through the vendor sales presentations, see the demos, read the success stories, get a few references, and then sign the quote. But the best buyers really get inside the process and follow some key ground rules. So let's get started:

  1. Never Ever Buy on Futures. The first golden rule of buying software is to base your buying decision on the functionality the vendor provides today, against your current needs. And always take what vendors promise you'll get in the future with a very large pinch of salt. Product roadmaps come and go, new product leadership arrives, business priorities change, and those plans are notorious for fluctuating, all the more reason to buy what's shipping today. Many vendors are going to be most aggressive and "blue sky" about promises on their roadmap when they really want your business, but don't fall into the futures trap.
  2. Ask "Do You Have Customers Like Me?" This is your opportunity to really do your diligence and mitigate your risk. Find out which other customers are really like you, or have achieved the same results, in a similar operating environment to yours. Are they using the same ERP or CRM system as you? Did they do a similar integration? Are they the same size and in the same industry? What partners did they use to make themselves successful? Don't just limit it to similar industry; get granular. You'll learn the pitfalls, and you'll also sleep better at night knowing that you're not an "edge case" customer.
  3. Ensure There's Headroom to Grow, Right Now. Ensure that there are other customers like you who have scaled higher, or are doing more today. You don't want to be the one pushing the functional and scalability boundaries of the software as you grow, otherwise you'll be spending a lot of time submitting tickets to the vendor. You want the software to have already been there and done that. So don't just check references for customers similar to you in industry and current size; check the ones that are where your company or project wants to be in 3-5 years' time.
  4. Use Your Network and Social to Do Your Research. In the old days, you'd have to rely on that customer success story that's been polished by the vendor, or participate in that carefully prepped reference call the vendor sets up for you. Or perhaps you had to network at expensive conferences or seminars, or work through your contact list to get some advice. Things have changed dramatically in the last few years. Review sites like trustradius.com provide much more direct feedback than you'll ever get from a vendor-provided reference. Or use professional communities like Proformative.com, which help financial professionals network around financial management software and best practices. Better yet, get plugged into LinkedIn groups and see what your peers have to say, or even use Quora to ask the questions. If a vendor has no reviews or mixed reviews on these sites, ask why that is. Are you buying PR hype, or real software that delivers value? As a buyer, you're more empowered than ever before with these tools, but it surprises me how few people use them to drive their decisions.
  5. Know the Meaning behind the Buzzwords – Before You Are Influenced by Them. There's a whole industry around tech buzzwords – cloud, multi-tenancy, in-memory, big data, and so on. The hype is driven from everywhere – analysts, media, and vendors who all have an interest in talking about the next big thing. Now, don't get me wrong, as a former engineer I love technology - but as a prospective customer you really need to understand the technology if you're making a purchasing decision based on it. For example, if you're buying a cloud solution you need to know the differences: Is it built for the cloud, or is it on-premise software that's hosted? Is the difference important to you? Do you care? Is the product really a "big data" solution – or is the vendor just jumping on the bandwagon? The lesson here is to get the facts behind the buzz before buying into it.
  6. Understand the Pros and Cons of Being One of the First. There always has to be a first customer of any solution. First can mean many different things – maybe you're the first customer the vendor has on their new solution. Or perhaps you're the first with 1,000 users. Or maybe you're the first using a brand new feature. There's a plus side to being first – you'll get more attention from the vendor, and you might get a break on the quote, because they want to make you successful. The downside is that you'll get more than the typical share of frustration. So you have to ask yourself how strong a position you are in with your anticipated project to absorb those kinds of obstacles as they come up, and how much faith you have in the provider to support you as you push the boundaries of their software.
  7. Get Hands On with What You are Buying. Sure, the vendor provides you with experts who'll demo the product, and perhaps build a customized demo based on your requirements. But you really need to know what's going on behind the curtain, and see how the sausage is made, as it were – otherwise that shiny demo you saw could be only that: a slick demo from an even slicker pre-sales guy. Instead of just seeing that report you asked for in your requirements, try to build it yourself. Nothing beats going to a workshop, or asking for a free trial, and playing with the product – it's the only way you'll get the real perspective.

Use these ground rules, and you'll have a solid foundation for your next buying decision.

What’s your perspective on other questions buyers should ask?

The Technology Shift of 1915: Lessons We Can Learn 100 Years Later

In 1915 there were more than 4,600 of these companies in the United States alone, a competitive and vibrant industry that had supported tens of thousands of workers, unchallenged for hundreds of years. Entire careers revolved around them – with professionals and specialists, skilled in the complexity and artistry behind their products. An entire supply chain existed – from suppliers, to manufacturers, to makers of essential artefacts like the “buggy whip” to support them.

Flint and Cincinnati led production, while towns such as Amesbury in Massachusetts thrived – home to more than 26 companies in the 1850s alone, with long forgotten businesses like George Adams & Sons, Loud Brothers, H. G. & H. W. Steven and William Chase & Sons humming away, struggling to meet spiraling customer demand.

The industry peaked in the 1890s with 13,000+ companies. Yet by 1925, there were just 150 of these companies in the US, and by 1929, just 88. In just 15 years, 98% of an industry had been wiped out.

The industry is of course that of the “chaisemaker.” Even the name itself has fallen out of use (Microsoft Word is telling me I have a typo!) More commonly, you’ll know the industry as that of the carriage makers.

Some within the carriage making industry saw the automobile as simply a passing fad and couldn’t imagine a future without the horse drawn carriage. Few understood the fundamental technology shift that was underway and how it would disrupt their businesses to the core.

While many of the carriage makers were much better capitalized, and had greater commercial reach, distribution, and brand recognition than their auto-making peers, almost all of them failed to adapt to the technology shift. Just a few made the leap. In fact, Studebaker was one of only two top ranked carriage makers that embraced the destruction of their old business, eventually retooling their entire production to manufacture automobiles instead. Companies that tried to hang on to the past, or simply apply old world skills and technology to the new world, ceased to exist.

And it wasn't just the carriage makers that were wiped out, but the entire ecosystem that supported them. The "buggy whip" was an essential accessory for any erstwhile coach driver, and an entire cottage industry in itself, with both the industry and the phrase crumbling in the face of disruption.

And here we stand today, another disruption underway, from on-premise software, with an entire ecosystem surrounding it, to cloud computing. What lessons can we learn from the shift from carriage to car?

  • Only those that embrace creative destruction will make the shift. The carriage makers that didn't invest in retooling their production failed. Most were too busy protecting their existing, dying revenue streams. The same holds true today, with some of the largest software vendors desperately trying to carry 20 year old software into the future. The carriage makers that simply attached engines to their old wagons didn't make the shift — just like those that are gluing old world software to the cloud won't either. The ones that designed autos from the ground up did – and very few did so. What made you successful in the past isn't the same formula for the future. Leaders like Apple understand this — by essentially destroying the iPod with the introduction of the iPhone, they discarded the old to build the new, and became even more successful as a result. Many of today's legacy software leaders have not learned the same lesson.
  • The transition is much faster than anyone expects. Over the course of 15 years, from 1914 to 1929, an entire industry basically ceased to exist. That's akin to a staple of the year 2000 sliding into the dust today – or perhaps today's cars essentially being replaced by self-driving cars by the mid-2020s. The pace of change can be disconcerting. Those that have spent their entire career in an industry invariably underestimate the breadth, depth and speed of change. The speed of disruption and the unwillingness to put aside old, no longer applicable technology is a potent combination, bringing organizations to their knees much faster than thought possible. Innovators like Google with a self-driving vehicle, and Tesla Motors with an electric vehicle designed from the ground up, understand this, while the old automakers do not.
  • New innovators emerge out of nowhere, faster than the old world leaders expect. At the beginning of the carriage makers' decline, in 1913, Henry Ford introduced the first moving assembly line. By 1923, US auto production had reached 3.7 million units, with the Ford Model T accounting for 52% of production. By 1926 there were more than 23 million cars on the road. By 1929, just as the carriage makers were taking their final breath, Ford and General Motors were the undisputed leaders, slugging it out with each other, GM netting $250M in profit in that year alone. The closest analog today is leaders like Salesforce, NetSuite, Workday and Adaptive Insights building leadership positions in their respective cloud categories: CRM, ERP, HCM and CPM. Leadership can change hands disconcertingly quickly.

The lesson? Embracing change is no half measure, but holding on to the past is riskier than embracing the future. Oh, and whatever you do, don't be the "buggy whip" maker.

Does Your Sales Presentation Have a Vanity Issue?

Let’s spend a moment cracking open your sales presentation. Does it start with how great your company is? Or perhaps how many awards and customer logos your company has acquired? Or maybe how fast your company’s growth is, your funding sources, or how successful your management team has been? 

If so (and believe me, you’re not alone), your sales presentation has a vanity problem – and it’s probably costing you deals. In fact, over the last 20 years, it’s never ceased to amaze me how many sales presentations still have an ego.

Because let's face it, your prospect doesn't want to hear how great you are: They want to hear how great you'll be at making them successful. Spending their valuable time chest thumping, rather than demonstrating that you really understand their pain and proving how you can solve it, is time wasted. In fact, it's often one of the biggest missed opportunities for really differentiating your product and services, and building trust.

So let’s make the shift from putting your company and products at the center of your sales presentation, to putting your customer at the center.

Here are my six golden rules for creating a successful sales presentation.

  • Keep it brief. Yep, it's true – people don't like to sit through an hour of slides with little interaction. Try cutting your presentation down to 10 slides or fewer – and treat every slide as an opportunity to ask questions, keep it interactive, and drive engagement. And it goes without saying, the less text the better.
  • Start with the pain, and then tell them how you can help. Lead with the pain your audience faces (preferably tailored based on a brief chat with them) - the big drivers - and how you can help. And I mean really help (i.e. metrics, not vague benefits bullets).
  • Know your audience. Ensure your sales presentation is tailored to the role and industry. Being too general signals to the prospect that you really don't understand (or worse still, don't truly care about) their needs.
  • Bring your customer stories to life, and tell memorable stories of success. Customer stories can be amazing - often the most powerful part of your presentation, and what your prospect remembers most - so bring them to life through storytelling and imagery (but make sure they're tailored to your audience!).
  • Try to keep the product stuff to a short real demo. People don't buy and use slides, they buy and use products. The more slides you show about your products, the more it looks like you don't have confidence in your products. Showing product shows confidence. Use it to prove your benefits.
  • Save your company chest thumping for last (and keep it brief). What this slot is really for is ensuring the prospect has confidence you can deliver on the stories and promise of success that you have shared - i.e. even when you talk about your company, it's still about your customer!

If you'd like help rethinking your corporate, sales, or product presentations, ping me and I can help!

Welcome to the Next Wave of Cloud Disruption

I have to say, "disruption" is one of my least favorite words. But stay with me, because what is happening in the cloud business applications industry right now really is that. It turns out that what some have called the last great computing architecture for business apps isn't necessarily the last. There is another cloud wave coming, and it promises to disrupt today's cloud vendors to the core.

Let’s Start by Going Way Back

The cloud in its current form has experienced two recent technology waves. The first, in the 1990s and early 2000s, was essentially hosting.

While hosting itself dates back to the 1960s, led by Big Blue and other providers, in the 1990s vendors became able to deliver their applications as a service far more efficiently. The change was enabled by the readier availability of third party hosting infrastructure and cheaper computing.

In those days, the vendor might host using their own internal data center – with significant capital cost and risk – or, better, use one of the early Application Service Providers to host the application for them. In this way, the vendor was able to offer their application without the overhead of building their own data center. As the costs of using third party data center providers dropped, more on-premise application vendors were able to offer their existing on-premise apps through this approach – an approach that extends all the way to today.

Yet, because many of these applications weren't really designed to be hosted, they were often limited compared to their on-premise counterparts, and were onerous for the vendor to manage. The lack of virtualization technology or multi-tenancy often meant a proliferation of hardware and application instances to support customers, while the user experience was often subpar because the application typically wasn't designed as a 100% web application.

For the sake of simplicity, we'll call these solutions "First Generation Cloud". And as I've pointed out in previous posts, it's very hard for a first generation provider (and there are many around today) to transform their stack while carrying over their existing customers and success to the next wave...

The Birth of the Real Cloud – the Second Generation

The late '90s saw the birth of the real cloud in the form we see it today for B2B applications: business applications designed from the ground up to run as a service, over the Internet. While still using essentially the same physical delivery mechanism as the first generation – third party data center facilities – the real cloud saw vendors building applications that were simply much more efficient to deliver at scale. They employed multi-tenancy, enabling a single application instance or codebase to serve many customers, which made efficiently onboarding and maintaining thousands of customers operationally feasible. Further, because they were designed for the web, the user experience was 100% web based, and so accessible from anywhere.
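To make "multi-tenancy" a little more concrete, here's a minimal sketch of the row-level flavor of the idea, using a hypothetical invoices table and made-up tenant names: one codebase and one schema serve every customer, with isolation enforced by a tenant filter on every query. Real platforms implement this in far more sophisticated ways; this just shows the shape of it.

```python
# A minimal sketch of row-level multi-tenancy (hypothetical schema and tenant
# names). One shared table serves every customer; the tenant_id filter keeps
# each customer's data isolated from everyone else's.
import sqlite3

def get_invoices(conn, tenant_id):
    # Every query is scoped by tenant_id, so customers sharing the same
    # tables never see each other's rows.
    return conn.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, tenant_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, "acme", 100.0), (2, "acme", 250.0), (3, "globex", 75.0)],
)

print(get_invoices(conn, "acme"))    # only Acme's rows
print(get_invoices(conn, "globex"))  # only Globex's rows
```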

These vendors built their new applications on the technology and delivery stacks of choice for software as a service at the time: They ran in third party data center facilities. They built and maintained the hardware and application infrastructure themselves – buying the hardware racks, configuring clusters, capacity planning. They used the operating systems of choice and the most trusted, scalable (and expensive) database and middleware technology available, like Oracle, to ensure they could scale up and out. Typically these companies invested in and maintained significant Operations organizations to monitor capacity, security, and unscheduled downtime, and to create and manage new clusters.

The barrier to entry for any real-cloud vendor using this generation of stack was still exceptionally high – they had to work with (and pay) a data center provider, buy hardware, procure an application infrastructure stack, and have talented engineers build multi-tenant codebases right down at the database level, while hiring an Ops team to manage security and availability. Expensive.

And because these applications led from the front in the cloud revolution, they also had to build a lot of the "context" application technology from the ground up to support their "core" application, because other web services simply didn't exist at the time. Basically, you pretty much had to build everything for your application, soup to nuts. There were few off the shelf cloud application components and tools you could use to augment your app and easily add value.

Enter the Third Wave – The Commodity Applications Cloud

While the first decade of the 2000s saw the rise of the cloud, the second decade has seen the explosion of the cloud. And this isn't just the result of a fundamental change in software buyer behavior. The last few years have seen a dramatic acceleration in both how fast new cloud vendors can deliver apps to market, and how quickly they can deliver powerful value and world class functionality.

The first driver is that the barrier to entry to deliver a cloud app has dropped enormously. The rise of Amazon Web Services, now a $5BN business for Amazon, along with other Infrastructure-as-a-Service platforms like Microsoft Azure, eliminates the need for new vendors to deal with costly co-located data centers, or to provision, buy and depreciate expensive hardware. You can get started at almost no cost and no economic risk.

Third-generation vendors get better elasticity, and avoid the need to buy infrastructure resources ahead of peak periods. And they get core operational security and availability out of the gate. This is everything that the second-generation cloud providers had to build and manage themselves. It means the third generation can spend valuable resources on apps, not infrastructure.

But there's more. The cost of application infrastructure is dramatically lower – with databases like MongoDB, PostgreSQL, OrientDB, ArangoDB, or MySQL that are freely available and provide the right level of functionality at a dramatically reduced cost compared to the traditional database providers used by the second generation. The same goes for the rest of the application infrastructure stack: less cost, and simply more accessible for developers to get started with. And modern application frameworks like Ruby on Rails make it easier than ever to develop and iterate web applications – frameworks their second-generation counterparts often aren't built on.

Finally, the third generation has the benefit of being built in a world where everything is now delivered as a service, and service enabled. Need to add analytics to your application? Why build it? How about Amazon QuickSight or PowerBI? Need to add predictive analytics? Why not Microsoft Azure Machine Learning? Need to add an Excel interface? Why not simply plug into service enabled Microsoft PowerQuery? Need to roll up data? Maybe RedShift? Want workflow? Why not incorporate KiSSFLOW or any number of other workflow tools? Need data integration? Why not simply plug into SnapLogic or MuleSoft? The list goes on and on.
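As a rough illustration of what "plug into a service" looks like in practice, here's a minimal sketch of an app handing a piece of work to an external workflow service over HTTP rather than building its own workflow engine. The endpoint URL and payload shape are hypothetical placeholders, not any particular vendor's API.

```python
# A hedged sketch of the "assemble, don't build" pattern: instead of writing
# an in-house approval-workflow engine, the app posts a request to an external
# workflow service. The URL and payload are hypothetical, not a real API.
import requests

def submit_for_approval(document_id, approver_email):
    """Kick off an approval workflow in a third-party service and
    return its tracking ID, or None if the call fails."""
    try:
        resp = requests.post(
            "https://workflow.example.com/v1/approvals",  # hypothetical endpoint
            json={"document": document_id, "approver": approver_email},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("approval_id")
    except requests.RequestException:
        # Degrade gracefully if the external service is unreachable.
        return None

print(submit_for_approval("INV-1042", "cfo@example.com"))
```

The point isn't the specific service: it's that capabilities the second generation had to build and operate themselves are now a few lines of integration code for the third.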

What's changed is that what the second generation previously had to build (and now maintains) themselves is now available off the shelf for the third generation. Simply put, the new breed of commodity applications cloud provider can innovate faster than their second-generation predecessors. There's definitely some irony in that.

What It Means

This transformation is structural. In my opinion, it's just as hard for a second-generation cloud provider to move to the third as it was for the first generation cloud to make the shift. The competitive implications are substantial.

It means an increased velocity of new providers entering existing software application categories with new cloud applications, built using commodity infrastructure and incorporating rich application services provided by other vendors. It's simply easier to enter a market than it ever was before, and to enter with a strong, competitively priced offering.

It means the third generation can attack their bigger second generation cloud competitors with applications capability built on the shoulders of others, while focusing a much larger percentage of their R&D on their vital areas of differentiation, rather than building and maintaining technology and application capabilities that have essentially been commoditized.

And ironically, it offers first-generation cloud providers a way to get back in the game - it's never been easier to get into the cloud.

It means more competition. It means more choice for buyers. It means more innovation. And yes, it means more disruption.

Welcome to the commodity applications cloud.

What the Rising Tide of Machine Learning Means at the Corporate Cubicle

As a bit of a machine learning/intelligence geek, over the weekend I read with interest the news on the MIT Data Science Machine, a set of algorithms that look for interesting patterns in data. And it brought to mind what I'd observed on the cubicle floor recently (more on that later).

What the Data Science Machine was proving out was how well pattern recognition in data can be cognitively automated, versus typical human cognition augmented with analysis tools. For example, within raw sales data, a simple but interesting pattern might be that different types of products sell better at different times of year, depending on location, and perhaps those sales are influenced by drivers not present in the data, such as the weather. The search for trends or patterns that are predictive is the stuff that careers are made of. The question was, who's better? Humans plus analysis software, or just software?
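To give a feel for what automated pattern search means, here's a toy sketch on a made-up monthly sales dataset: the code simply enumerates candidate drivers (month, a fabricated temperature series, a weekend percentage) and keeps whichever correlates most strongly with sales, with no analyst in the loop. The real Data Science Machine is vastly more sophisticated; this only illustrates the idea.

```python
# A toy illustration of automated pattern search over synthetic data
# (all numbers below are invented for the example). The loop scores each
# candidate feature by its correlation with sales and picks the strongest.
from statistics import correlation  # requires Python 3.10+

months      = list(range(1, 13))
temperature = [3, 4, 8, 13, 18, 22, 25, 24, 20, 14, 8, 4]      # fabricated readings
weekend_pct = [28, 28, 29, 28, 30, 29, 28, 29, 28, 29, 28, 30] # fabricated
sales       = [40, 42, 55, 70, 95, 120, 140, 135, 110, 80, 55, 45]

candidates = {"month": months, "temperature": temperature, "weekend_pct": weekend_pct}
scores = {name: abs(correlation(values, sales)) for name, values in candidates.items()}

best = max(scores, key=scores.get)
print({name: round(score, 2) for name, score in scores.items()})
print(f"Strongest single predictor of sales in this toy data: {best}")
```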

"The machine" acquitted itself well, beating 615 out of 906 human teams, and achieving results 94 percent and 96 percent as accurate as the best human counterparts. More interestingly,  "where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries." I think most of us would take 96% as accurate, for the advantage of that step function increase in speed - and without the cost of labor.

And so, to the proud owner of about 3 lbs of wetware that had been comfortably attuned to the idea that only manufacturing or low-level numerical jobs would be displaced by automation, it signals that the rising tide of machine intelligence encroaching on white-collar work is perhaps nearer than we think. In fact, I'll share a real-life experience of what that feels like later on.

One Man's Meat is Another Man's Poison

Nick Bostrom, in his book Superintelligence: Paths, Dangers, Strategies, predicted the economic impact of machine intelligence, observing that "what is seen as a boon from one person's perspective can be seen as a liability from another's" and that it portends a profound effect on the economy. The excited advocates of a new technology often have a hard time thinking about what it means for the people on the receiving end of it. And people's reactions to change can get really interesting, really quickly.

We've seen this script before, at the dawn of the 19th century. The economic shock of the introduction of the power loom and spinning frames slashed the salaries and work prospects of the hand weavers and artisans that preceded them. They (understandably) took their anger out on the machines that were disrupting their livelihoods, signing their acts of sabotage with "Ned Ludd did it" - spawning the (inevitably short lived) Luddite movement.

The Rational Luddite

Prior to the power loom, a handloom weaver could produce 20 yards of cloth in an hour. After its introduction, a power loom could produce 40 yards in the same amount of time. The result was immediate: those that stuck to the old methods of production effectively had their pay slashed in half.

Contrary to popular thought, the Luddites behaved not irrationally, but rationally in the face of an unfavorable economic shift with a collective (and rather destructive):

“W-T-F”. 

And almost precisely 200 years later, we’re beginning to see the same human response in the face of machine learning – and soon, more disruptively, deep learning. First shock and disbelief, perhaps a little embarrassment, and sometimes even anger. 

The acceleration in machine learning and intelligence promises greater productivity improvements than perhaps even those experienced at the beginning of the industrial era. Algorithms can increasingly produce more accurate inferences, predictions, and recommendations, often based on vastly more data than our software-assisted wetware can sift through. Comparable or better results, with less labor, and a step-function increase in speed.

The Rising Tide on the Cubicle Floor

So what does it mean for traditional corporate roles – perhaps in the Halls of Analysis – and how will people react in the face of it? Let's get back to that real-life example I observed recently.

Typically, medium to large organizations invest in teams that build forecasts. Accuracy is prized, and increased accuracy farther out into the future is prized even more.

The tools du jour are mostly spreadsheets, statistical tools like SAS and R, or data mining tools for building predictive models, depending on the organization's level of sophistication. In this one example, the company had a team using in-house proprietary models and analysis tools to build their forecast. Management was naturally looking to increase their forecasting accuracy, so wanted to try out advanced machine-learning techniques. They wanted to see if an automated predictive model, fed with big data, could unlock some competitive advantage.
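For anyone who hasn't watched one of these bake-offs, here's a toy sketch of what the comparison looks like in miniature, using synthetic quarterly figures invented for the example: a naive carry-forward forecast standing in for the in-house model, a simple least-squares trend fit standing in for the machine-learned one, and mean absolute error deciding the winner. The real exercise involved far richer data and models; only the shape of the comparison carries over.

```python
# A toy forecast bake-off on synthetic data (all figures invented).
# "In-house" = carry the last observed value forward; "ML" = a simple
# least-squares trend line. Mean absolute error (MAE) picks the winner.
actuals = [100, 108, 115, 123, 131, 140, 148, 157]

# In-house style: next period equals the previous period.
naive_forecast = actuals[:1] + actuals[:-1]

# Trend style: fit y = slope * t + intercept to the whole history
# (evaluated in-sample here, purely for illustration).
n = len(actuals)
ts = list(range(n))
t_mean = sum(ts) / n
y_mean = sum(actuals) / n
slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, actuals)) / sum(
    (t - t_mean) ** 2 for t in ts
)
intercept = y_mean - slope * t_mean
trend_forecast = [slope * t + intercept for t in ts]

def mae(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

print("Naive MAE:", round(mae(naive_forecast, actuals), 2))
print("Trend MAE:", round(mae(trend_forecast, actuals), 2))
```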

But it turned out the new machine-learning based model really sucked. In fact, it was worse than the in-house models that the organization was already using.

Case closed. These guys already had a solid lock on their forecast. Nothing to see here.

Wrong. The in-house forecasting team had deliberately provided erroneous data to throw the machine-learning algorithm off. Once the issue was corrected, it turned out the new approach delivered a substantial improvement in accuracy, and did so near instantaneously.

Needless to say, as this was going on, the large in-house team was frantically looking for other roles in the organization, seeing the writing on the wall: their models (and corporate role) were being replaced by a more accurate technology that required fewer people and took less time – all while they tried to conceal the embarrassment of how far off their old models had been.

My prediction: We’re going to be seeing a lot more of that.

It's worth turning back to the implications of the Luddite movement. Some people lashed out in anger or tried to block change (understandable, but not a successful or sustainable strategy), while others readjusted to the rising tide, working with the machines rather than competing with them. Still, so much change arriving so quickly had profound societal impacts, and as the pendulum swung further toward employers holding the leverage, it sowed the seeds of the labor movement.

Only those that swam to higher ground were able to economically survive.

7 Practical Tips for Creating Competitive Strategies That Your Sales Team Will Love

When was the last time you took a hard look at your company's competitive strategies and tools? And when was the last time you got direct feedback from your sales team on whether they're really effective or not?

Done right, competitive tools can influence the most valuable opportunities - the ones where there is budget and timing, where real money is going to be spent - preferably with your organization, not a competitor. Done badly, they can (rightly) be an incredible source of frustration for sales teams who are on the receiving end of lower sales effectiveness.

My simple acid test is this:

You have a prospect meeting tomorrow. Do you have full confidence that the competitive strategies you have on hand will materially impact the deal in your favor?

Typical complaints that I've heard from sales teams who are living this acid test include:

  • The information is too vague to be effective.
  • It's too verbose and hard to digest.
  • It’s simply not hard hitting enough to influence an opportunity.
  • Some of the points are outdated or just plain incorrect.
  • The competitor already neutralized the competitive points.

Sometimes there is no outright owner for competitive analysis, or ownership is diffused across the organization (both typically problematic). In other cases, even when there is an owner, the competitive intelligence is heavily discounted by sales as simply not sharp enough to be effective - the "owner" is just too far away from where the rubber meets the road.

So, here are my top tips for creating a competitive process that the sales team will love (and will love you for driving!):

  1. Be proactively competing - even before there is a competitor. Your sales organization should be setting the agenda from the moment they walk in the door to de-position the anticipated competition. For example, is it a SMB opportunity or a large enterprise one? Is it for a senior decision maker, or a mid-level one? Are the competitors broadly different based on these segments? If you carefully define your segments and map your key messages and differentiators to them, your sales people can be proactively competing and setting traps -- so they are set up to win from the very first meeting.
  2. Map your competitive strategies into every sales stage, from initial meeting to close. A competitive sales tool doesn't stand on its own. Your sales person should be differentiating from the very first engagement, because if they wait until the last moment to set the agenda, they've probably already lost the deal. So think through each stage of the sales cycle – from setting the agenda in the first presentation, to calling out differentiators in a demo, to benefit justification tools, customer proof points, and the final bake off. It's a whole competitive playbook, so that competing and differentiating is a journey, not an event.
  3. Structure a team so that the competitive intelligence is relevant, right and always up-to-date. The most grievous issue I’ve seen is simply when sales doesn’t buy into the competitive information they’re equipped with. So, I’d recommend establishing a team per major competitor and use collaborative tools to stay current. The team should consist of prospect facing roles like sales and sales consulting (or even an outside win/loss firm if you can make it work - more on that in a future post) who are dealing with the competitor every day. If you’re driving the project, your job is to collect and distill that tribal knowledge, and do it continually. That way, you get buy in from the sales team, and you have confidence in the strategies you're delivering.
  4. Be timely. Getting timely competitive info out in front of sales is incredibly valuable – perhaps it’s an award, or an analyst report, or maybe a competitive misstep. Regardless, get the information out quickly, and provide sales with the messages on how to position it – that way it can directly influence sales opportunities that are in flight, right now.
  5. Put yourself in the salesperson's shoes when you're creating a competitive cheat sheet. It can't be too verbose or abstract (I've seen a 30 page competitive guide before – who has the time to read it?!) – it has to be pithy material that can be quickly absorbed and effectively delivered.

    Key items to include: a two sentence elevator pitch; competitive switch names / soundbites; competitive win sound bites (ideally industry specific); metrics (see below); analyst proof points; 5-10 quick competitive points; 5-10 responses to incoming competitive Fear, Uncertainty and Doubt (FUD); and key points to show and reiterate in a demo.
  6. Create some quick, memorable tribal metrics. Do you have many more customers than a particular competitor? Or perhaps you grow much faster according to an analyst? Maybe your R&D spend is much higher, or your customer satisfaction is stronger on 3rd party review sites like TrustRadius. Institutionalize these as metrics and share them.
  7. Turn a Negative into a Positive. Sales often feels the temptation to "bash" the competition. But study after study has shown that prospects respond differently, and often negatively, to bashing – and besides, if the prospect selects the competition, you still don't want to have jeopardized a future sale by damaging your brand. Focus on equipping your prospect to make a decision based on your benefits, rather than bashing the competitors your prospect is considering.

Solid competitive strategies are one of the key areas where marketing or sales operations can really move the dial with sales. Let me know if you need a little help breaking the inertia with your competitive strategies!