Analytics

Is Tableau the New Netscape?

Tableau has done for the discovery of data what Netscape did for the discovery of information with the first mass-market web browser – empowered the masses. For data discovery, Tableau makes it simple to connect to some data, slice and dice it, and create some cool visualizations. It more than satisfies a simple equation for a software product:

Love = Results – Effort

That is, if the results for your users are way larger than the effort they put in, you have a winning solution – and Tableau kills it. Tableau’s timing was perfect: end-user empowerment and the proliferation of data arrived just as traditional command-and-control analytics was reaching a user-frustration tipping point. Tableau provides an incredible level of interactivity to “play” with the data, without requiring IT.

And there is one other timing aspect that Tableau has continued to capitalize on: a sustained vacuum of analytics vision from Microsoft, which had been asleep at the wheel on analytics. For a long time, Pivot Tables and Microsoft Analysis Services were the last great analytics innovations from Microsoft, and those introductions disrupted vendors (I worked at a vendor on the receiving end, and it sucked). But since then, it has been a nuclear winter. That absence enabled Tableau to spawn a new industry – empowering users to explore data – and to thrive.

The Browser Wars of the Mid-90s

Similarly, when Netscape first appeared with the growth of the Internet, Microsoft was essentially asleep at the wheel too. At its peak, Netscape had an 80%+ share of the browser market. Fearful that Microsoft was late to the Internet, Bill Gates led the call to arms with his “Internet Tidal Wave” memo. One of the areas it targeted: Netscape. The strategy was to put Microsoft’s full weight behind breaking Netscape’s dominance, with (love it or hate it) Internet Explorer. Netscape quickly lost share as IE simply became the default – dropping to less than 1% share by 2006.

Netscape's share of the browser market from the '90s to the '00s

Gates’s Internet is Nadella’s cloud and data. One cornerstone of Microsoft’s strategy is not just the cloud with Azure (now second only to AWS), empowering developers to create cloud services, but also tools and services that empower users to work with data.

The announcements around analytics have come thick and fast: PowerBI; PowerBI Desktop; PowerBI Mobile; PowerQuery; Azure Stream Analytics; Azure HDInsight; Azure Machine Learning; and Cortana Analytics. For the PowerBI suite, the price is right – PowerBI is free, and PowerBI Pro is $9.99 per user per month, which gets you more data, more refreshes, on-premises connectivity, and more collaboration features.

The Coming Data Discovery War

So I tried out the web flavor of PowerBI a few months ago, bringing some data from Salesforce into a prepackaged web dashboard. It was cool, but to be honest the results were too limited – you couldn’t really play with the data enough. Definitely a threat to some cloud dashboard providers, but no threat to Tableau for real empowered data discovery. It’s more for consuming analytics than for playing with data. It fits into a data discovery framework, but isn’t the whole solution.

Fast forward to last week, when I tried out PowerBI Desktop. PowerBI Desktop is basically the equivalent of Tableau Desktop. And the interplay is similar: users create rich analytics with the client, and then publish to the web to share the results.

But what blew me away was how PowerBI Desktop stacks up....

Let’s start with the data sources. They’ve done a great job of adding a huge number of sources – the usual suspects like Excel, text files, and databases, but also a wide range of big data, social, ERP, and CRM sources. It looks like they’re working with ISVs to add sources at a frightening rate. Getting access to data is often one of the big stumbling blocks for data discovery (and, I think, one of Tableau’s weaker areas) – and it looks like Microsoft is really focused on cracking the code here.

So then I thought I’d get my hands dirty and give it a little test drive with my favorite old-time schema – Northwind (which I was pleased to see Microsoft still uses for on-stage demos!). It’s a relational schema, and PowerBI Desktop did the automapping for me, then let me easily make some changes to the joins. Nice and straightforward, very usable, and easy to visualize the relationships.

Finally, for the really fun bit: some data discovery. And this is where it was shockingly good. From soup to nuts, from data to dashboards, I built the quick example below in about 20 minutes. And it checks all the boxes. On the right is an easy field selector, and there’s a rich array of visualizations – traditional charts, heatmaps, gauges, geospatial charts (more visualizations can be added by third parties), and so on. All of the visualizations have strong data flexibility, so I could easily change the data I’m seeing in the chart, filter it, use TopN/BottomN, etc. I found myself easily slicing around the data, trying out different views, just like Tableau.
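To make the mechanics concrete, here’s a rough pandas sketch of the kind of join-and-slice work the tool is doing through its UI. The table and column names follow the classic Northwind schema; the CSV extracts are hypothetical and purely illustrative.

```python
# Rough pandas equivalent of the joins and TopN slicing done through the UI.
# Column names follow the classic Northwind schema; the CSV extracts are
# hypothetical files, not anything PowerBI produces.
import pandas as pd

orders = pd.read_csv("orders.csv")           # OrderID, CustomerID, OrderDate, ...
details = pd.read_csv("order_details.csv")   # OrderID, ProductID, UnitPrice, Quantity
customers = pd.read_csv("customers.csv")     # CustomerID, CompanyName, Country

# Resolve the relational joins that the automapper infers from the keys.
sales = (details
         .merge(orders, on="OrderID")
         .merge(customers, on="CustomerID"))
sales["Revenue"] = sales["UnitPrice"] * sales["Quantity"]

# A TopN slice: the ten biggest customers by revenue, the kind of filter
# applied interactively in the visualization pane.
print(sales.groupby("CompanyName")["Revenue"].sum().nlargest(10))
```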

Some of the cooler stuff is how the dashboard components automatically snap together, with no effort at all – so, for example, when I click on a region on the map, my other charts automatically re-orient – and it’s easy to create a book of dashboards, calculated measures, etc.

Oh, and publishing is simple too.

So, is Tableau the New Netscape?

Which brings me back to the comparison at the start of all this. PowerBI Desktop does what 90% of people need to do with discovery tools, it’s free, and it’s nicely integrated with Office. So why use Tableau then? Sure, Tableau is still better in some areas – more visualizations, it chooses the right chart automatically, Mac support – and I’d say it still has a slight edge in intuitiveness for data discovery. But here’s the kicker: Tableau is 10+ years old, PowerBI is 1.0 – and it’s tying into Microsoft’s broader strategy around Azure, Office 365, and Cortana. Brutal.

I’m sure there’s chatter going on in the halls of Tableau about PowerBI. To be sure, the threat from PowerBI perhaps means considering additional options around predictive analytics, or moving towards an applications strategy beyond tools.

Of course, if I were to take the Netscape analogy to its ultimate ending, out of the ashes of Netscape rose Firefox – which came to haunt Microsoft. I’m not sure this story will end in the same way.

Data Discovery: Warning, Batteries Not Included

There were few things worse than the Christmas disappointment of frantically tearing open a present to find out it was dead in the water – no batteries. Worse still, back in the day when I was a kid, there weren’t any stores open on the day. So in the absence of some forward planning with batteries on hand (usually unlikely), it meant a grindingly slow wait until the following day to get some satisfaction. From anticipation to disappointment in a few short seconds. These days toy manufacturers are smarter – they’ll just include them, thankfully.

Sometimes software is prone to the same issue – most recently, data discovery tools in particular. Data discovery has been one of the fastest growing segments within analytics, growing substantially faster than its traditional Business Intelligence counterparts. And with good reason: data discovery adoption typically starts as a bottom-up, business-user-driven initiative. It begins with a frustrated and enterprising analyst looking to explore or share some insight. Caught between spreadsheets and the absence of a useful (or existent) analytics initiative – usually too costly, too rigid, or just sitting on the shelf – data discovery just makes sense for getting to success quickly.

The great thing about data discovery tools is that they provide near-instant satisfaction, from quick and easy setup through to data visualization and exploration capabilities – from easy ad-hoc analysis to cool geospatial visualizations and heat maps. With tools like Tableau you can get eye-catching results incredibly quickly against spreadsheets, a database, or a cloud source like Salesforce. A business user can typically go from data to dashboard significantly faster than with traditional Business Intelligence tools, because those tools require complex mappings, semantic layers, and IT setup before delivering any joy.

In contrast to traditional BI tools, they eschew centralized data integration, metrics layers, and IT-maintained business mapping layers. That’s the unglamorous stuff that, once it’s all done (which takes a lot of time!), is often too rigid to accommodate new ad-hoc data requirements, or misses the mark in answering the questions analysts need to ask when the need arises. The simple fact is that it is difficult to design an analytics initiative a priori – because you don’t necessarily know all the questions analysts will ask. It’s why data discovery has been so successful and adopted so quickly.

What About Those Batteries?

It’s true: setting up all of that data integration, and the semantic layers for users to interact with, slows traditional BI deployments down. And having to prepare data or optimize database schemas to get decent query performance is just plain thankless. Analysts just want to answer the questions they have, right now. All of that plumbing just gets in the way of speed and autonomy.

So data discovery tools typically dispense with all that, but in doing so they throw the baby out with the bath water – and there are consequences. Their value proposition is simply to point the tool at a spreadsheet, a text file, a simple data source, or perhaps a cloud source like Salesforce, and start analyzing. The problem is that life in the long run is rarely that simple, and that nice shiny product demo often hides the real data integration complexity it takes to get to that place. Even spreadsheets and text files need cleansing; opportunities or accounts in Salesforce need de-duping. Never mind joining accounts across CRM or ERP systems, or resolving complex joins across multiple tables (or databases). In emphasizing speed and autonomy, what’s lost is reuse, repeatability, and sharing clean data.
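To give a flavor of that hidden prep work, here’s an illustrative pandas sketch of cleansing, de-duping, and joining CRM and ERP extracts. The file names, keys, and columns are hypothetical, not tied to any particular product.

```python
# Illustrative sketch of the prep hiding behind the shiny demo: cleanse,
# de-dupe, and join CRM and ERP extracts. All names are hypothetical.
import pandas as pd

crm = pd.read_csv("salesforce_accounts.csv")   # AccountName, Domain, Region
erp = pd.read_csv("erp_customers.csv")         # CustomerName, Domain, CreditLimit

# Cleanse the join key before any matching.
for df in (crm, erp):
    df["Domain"] = df["Domain"].str.strip().str.lower()

# De-dupe Salesforce accounts that share a domain.
crm = crm.drop_duplicates(subset="Domain", keep="first")

# Join the CRM and ERP views of the same customer on the cleaned key.
unified = crm.merge(erp, on="Domain", how="left", suffixes=("_crm", "_erp"))
unified.to_csv("unified_accounts.csv", index=False)
```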

It’s Like Making a Battery Run to the Store. Daily.

What often happens, especially when data discovery tools get virally deployed across departments, is that IT, or the administrator of the data source in question (e.g. the Salesforce or ERP admin), gets left holding the bag. It means repeated requests for ad-hoc data extracts, or the analyst repeatedly grabbing an updated extract and then trying to join it with other sources and cleanse it in spreadsheet hell. Over, and over again.

The organization turns into a culture of one-offs – a one-off extract of a few periods of data for some win-loss analysis, another extract for some product discounting analysis. Analysts may end up performing weekly or monthly data prep and cleansing just for their own activities, with no shared benefit for the rest of the organization. The business ends up with multiple data silos and a lot of redundant effort. Multiple versions of the truth get created, with every data discoverer using his or her own logic to cleanse, transform, and visualize the data.

Everyone ends up with cool visualizations to share (and impress the management team with!), but the organizational cost is high, with wasted time and redundant sets of conflicted data.

But things can be different with a little planning ahead.

Three Steps to Building a Batteries-Included Approach to Data Discovery

1)     Create a sustainable Data Discovery strategy

I’m not advocating building old-school centralized BI (though it does have a role as part of a broader analytics strategy – more on that later), because data discovery tools fill a need to understand and explore data quickly. But organizations need to create a strategy around data, and encourage sharing of not just dashboards but data too – to optimize for more reuse. Then, when the organization hits an inflection point in data discovery adoption, there is readiness to roll out user-driven data prep tools like Paxata and Alteryx. These tools provide relief by enabling business users not just to prepare their own data and automate common preparation activities, but to share it with others too. The outcome is shared pools of data that have been refined to handle common business questions. And better yet, compared with traditional data warehouse initiatives, when data is prepared from the bottom up and shared, you’ll often end up with much more pragmatic and useful data for real-world business questions, based on a more democratic (and continually improving) process for improving the data pool.

2)     Identify data sources that need to be frequently analyzed and optimize for re-use.

One of the other keys is to identify which data requests have descended into inefficiency and dysfunction. For example, run a quick poll amongst app administrators – ask the Sales Ops Salesforce or Dynamics GP admins which data pulls for business users have become onerous. Perhaps there is a month-end extract from multiple ERPs that has to be merged every month and is sucking up cycles in finance or ops. It’s also worth polling analysts to understand what kinds of recurring transformation and merging they’re performing – and which ones are duplicated across team members. The answers to these questions reveal which data tasks are candidates for consolidation across teams or are opportunities for automation.

3)     Think Holistically about Analytics, Create a Journey

As we've seen, while laissez-faire adoption of discovery tools can create results quickly, it’s often not sustainable as adoption scales up. The truth is that there typically needs to be some ownership and data stewardship. In mid-size organizations that may mean an analytics strategy led by finance, perhaps consisting of analytics embedded in the transactional apps, some centralized BI/reporting (for hardened, shared metrics and reports), collaborative data pools, and data discovery tools. In larger organizations, it’s a prime area for IT to lay the foundation to support a sustainable, bottom-up data discovery strategy.

So before you go out shopping for that shiny new data discovery tool for the holidays, and think about rolling it out across your organization, consider stocking up on batteries first – so your team spends more time playing with visualizations, and less time stepping over each other around data.

 

Using Benchmarking Analytics to Improve Accounting Productivity and Employee Engagement

Benchmarking, or comparative analysis, has been around a long time. It has always been one of those promises software vendors make around analytics, but one that typically goes unfulfilled in reality, in terms of real adoption or genuine usefulness.

Often it's the sizzle part of a dashboard demonstration for a vendor – where they can wow the audience showing how a company’s financial performance compares against industry averages, by importing data from a third party data provider and comparing key financial and management performance measures such as Revenue Growth, Profitability, Revenue Per Head etc.

While interesting, it turns out not to be that useful to many – hence the rather tepid adoption within analytics deployments. If your company is less profitable than your industry peers, you probably knew it already – and actually finding the root cause of the issue is often a separate project entirely, and where the real work is. The insights are often too far removed from where the real action is: in the departments where people and process are at work.

Benchmarking to Improve Actual Business Processes

So it was with interest that I saw a demo of the newly launched BlackLine Insights at the company’s InTheBlack conference in Atlanta this week. BlackLine is a cloud provider of solutions that automate and streamline the close process for accounting organizations – enabling them to automate millions of bank reconciliations, quickly resolve intercompany reconciliations, and take the manual effort out of the close process.

But this is where it gets really interesting: with 1,200 customers and 120,000 users, they have a huge amount of data about the productivity and processes of the accounting organizations they serve. The kind of data we're talking about here is process measurements like on-time completion rate, average completed assignments, or average rejection rate. With benchmarking, BlackLine customers can see how their own accounting function stacks up against the broader community – by metric, by industry, and by organization size.
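As a rough illustration of the idea (this is not BlackLine’s implementation, and the field names are hypothetical), benchmarking boils down to aggregating these process metrics across customers by peer group:

```python
# A sketch of benchmark aggregation across customers (not BlackLine's
# implementation; the field names are hypothetical).
import pandas as pd

# One row per customer per close period.
metrics = pd.read_csv("close_metrics.csv")   # customer_id, industry, size_band,
                                             # on_time_rate, rejection_rate

benchmarks = (metrics
              .groupby(["industry", "size_band"])
              .agg(median_on_time=("on_time_rate", "median"),
                   p75_on_time=("on_time_rate", lambda s: s.quantile(0.75)),
                   median_rejection=("rejection_rate", "median")))

# Each customer then compares its own numbers against its peer group's row.
print(benchmarks.head())
```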

Creating a Level Playing Field Between Employees and Managers

The opportunity is to enable continuously improving efficiency through continual measurement. But the really good news is that it cuts both ways, because it also creates a level playing field in the accounting organization between employees and managers.

The reason is that, in addition to enabling management to identify opportunities to improve the close process by spotting areas of underperformance or lower-than-average productivity, it can also be used to ensure management doesn’t set unreasonable expectations about what the team can realistically crunch through during the close – by measuring against what’s actually achievable in the industry. It’s real data that accounting staff can use to establish common ground for productivity expectations, and it equips all parties to set goals that everyone buys into.

For example, perhaps the team is burning the midnight oil to get reconciliations done, but management is setting higher goals. With benchmarking, they can look up the norms in their segment – and share them with management to justify hiring, or to operate more effectively as an organization. Real employee empowerment. And management can set goals for accounting productivity not just on gut feel, but by comparing with other high-performing companies – realistic goals that employees know have been established with rigor and fairness, so everyone gets bought in. Data drives decisions, in both directions. That's a little more democratic.

Business process benchmarking opens up a whole range of opportunities for measurement – comparing speed of close, industry error rates, responsiveness, or speed of resolution. It even offers future opportunities around gamification, perhaps with badges and awards for achieving business process excellence, such as being in the top percentile of performance in the industry. There's even potential for translating measurable business process excellence into LinkedIn profile fodder!

Down the line, linking accounting efficiency benchmarks with business performance measurements can finally connect company performance to accounting process performance, providing the narrative to shift the accounting organization from cost center to value center.

The Cloud as Benchmarking Enabler

The cloud makes this possible for BlackLine: because everyone is running on the same codebase and the same platform, metrics can quickly be aggregated across customer usage data. It takes the hard work out of collecting, comparing, and using the data – for both BlackLine and their customers.

Interestingly, this kind of benchmarking is incredibly hard to do with tools designed for an on-premises world (or fake cloud solutions) – because it requires aggregating usage and application-level metrics across customers, so centralization and a common code-base and schema are key. You also need scale, in terms of the number of customers across industries, to make the data and the insights useful.

It’s also a pretty big contrast to the old method of business process benchmarking – infrequent surveys from professional associations and analysts – because often the measures aren’t granular, typically aren’t broken down by industry, and then you’ve got to reconcile the data (pun partially intended) between your own internal business process measures and the survey provider’s. In this area, there is an opportunity for BlackLine themselves to become a benchmark data provider, and even provide narrative on trends in accounting organizations based on the data.

But one of the most interesting implications of solutions like BlackLine Insights is fostering a sense of community amongst users. With everyone in the BlackLine community running the same solution, for the first time accounting teams across organizations can compare stats, and share tips on how they moved the dial to improve them. Everyone is sharing performance metrics on the same playing field, using the same platform they can actually use to improve them.

Cloud has offered up the opportunity for better benchmarking for some time, and the intersection with business process and community offers compelling value. It'll be interesting to hear stories of benchmarking in action at InTheBlack 2016.

Why Planning and Analytics are like PB&J

So this post is for the planning and analytics geeks out there. I enjoy watching software application categories undergo fundamental change, where real innovation starts to appear. And the nexus of planning and analytics is where this is happening.

Often the drivers behind these kinds of changes are technology-based, such as the rise of mobile, or social and economic, such as the rise of the self-employed economy. But when these external drivers gather momentum they often disrupt software categories. Some people go with the flow; others try to fight it.

With that in mind, I recently read with interest an article that made the case that Analytics (or, for the old school among us, Business Intelligence) and Budgeting/Planning applications are two separate worlds that don’t really need to be unified in a single application – that putting them together is just hype, not useful.

It was on the heels of a set of announcements from SAP, with SAP Cloud for Planning bringing together both analytics and planning – large scale analytics, data visualization, modeling and planning under the same unified hood, underpinned by SAP HANA. I personally think there is real innovation to be had at this nexus of analytics and planning, but more on that a little later.

The crux of the article’s case was that Analytics is for the tech guys who get big data, data prep, data warehousing, SQL, and unstructured and structured data, while Planning is for Finance, who worry about drivers, financial allocations, forecasts, and so on.

Different Disciplines, Different People?

I get it – my background hails from the world of data warehousing, Business Intelligence, and Online Analytical Processing (OLAP). And to be honest, financial planning was a different world. When I built dashboards and analytics for organizations (typically for the crew in IT), there was often a separate planning implementation going on in the room next door for Finance. Each side looked with some disdain at the other (I preferred to write SQL than to think about cost allocations).

When Business Intelligence first emerged in the mid-90s it was built by tech, for IT – we’re talking star schemas, semantic layers, and all that good stuff – distant from the world of finance. When the first packaged planning apps for finance appeared, by contrast, they were built as apps. New technology of the time, like OLAP databases, was optimized for modeling and what-if analysis for finance, but offered fewer dimensions, less detail, and weak ad hoc analysis; big-iron data warehouses were optimized for large-scale analysis, but couldn’t handle the changes in assumptions inherent in the modeling and planning process.

So when the rounds of vendor consolidation around Business Objects, Cognos, and Hyperion happened in the mid-00s, the result was two (often more) stacks: Business Intelligence (BI) and Corporate Performance Management (CPM). Two different categories, different skill sets, different code-bases. Vendors glued these stacks together with a veneer of branding and single sign-on to make them look like a suite, but beneath the thin integration they were really different code-bases and experiences.

Change is Underway.

But just as NetSuite and Workday are reimagining their respective categories of ERP and HCM for the new economy, the same shift is beginning to gather pace in CPM. In ERP, for example, eCommerce capabilities increasingly need to work seamlessly with the ERP, from web storefront to order, because a digital storefront is often strategic. And HCM apps need to be mobile-first in an increasingly self-service world. CPM is undergoing a similar transformation, just differently.

CPM is changing because planning itself has to be more responsive, more in tune than ever before with the operating environment. And that requires analytics.

A recent Hackett Group survey showed that about a third of companies intend to implement rolling forecasting over the next few years. Combined with those already doing it, Hackett saw over half of companies building some kind of rolling forecasting process. Hackett attributed it to increased competitive pressure on companies and faster-moving markets. Companies don’t just want to see further out; they want their forecast adjusted to a continually changing environment.

So doing a yearly plan/budget isn’t good enough anymore either. And as organizations move to rolling forecasts, it means ingesting ERP, HCM, and CRM data more and more frequently. More frequent planning and the push for more accurate forecasting also mean responding to external data. Not all of this data needs to be in the plan itself, but planning professionals must be able to update planning drivers, change assumptions, and make course corrections in the face of the larger data landscape they are expected to respond to – and they need to see that environment clearly.
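As a toy illustration of the rolling-forecast mechanic – the numbers and the naive growth driver are made up – each close appends the latest actuals and re-projects the next twelve months:

```python
import pandas as pd

# Monthly revenue actuals to date (illustrative numbers only).
actuals = pd.Series(
    [100.0, 104.0, 101.0, 108.0, 112.0, 110.0],
    index=pd.period_range("2015-01", periods=6, freq="M"),
)

def rolling_forecast(actuals, horizon=12, growth=0.01):
    """Keep actuals for closed periods and project a simple growth driver
    over the next `horizon` months (deliberately naive)."""
    last = actuals.index[-1]
    future = pd.period_range(last + 1, periods=horizon, freq="M")
    steps = pd.Series(range(1, horizon + 1), index=future, dtype=float)
    projected = actuals.iloc[-1] * (1 + growth) ** steps
    return pd.concat([actuals, projected])

# Each close: append the newly reported month and re-run, so the plan
# rolls forward instead of stopping at the fiscal-year boundary.
print(rolling_forecast(actuals).round(1))
```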

The data landscape they're making decisions on is larger than before, and they’re being asked to re-plan and respond to that landscape faster. Planning no longer takes place in a vacuum, and it takes place more frequently, and closer to the business.

The dashboard vendors don’t have it easy either, because standalone dashboards aren’t really good enough anymore: there’s no call to action in them. Just seeing a chart isn’t enough – the expectation is that you’ll do something about it. Either you take action in your system of record (that’s why providers like NetSuite, Workday, and Salesforce provide embedded analytics), or you plan and adjust based on those insights, using engines that combine analytics and planning, like SAP Cloud for Planning, Anaplan, and Adaptive Insights. But a standalone, run-of-the-mill web-based dashboard environment (and a standalone planning environment) is deteriorating in value.

But really reimagining planning and analytics as a single unified solution means starting with a clean sheet of paper. Providers like SAP are taking the lead. Remember those data stores I mentioned earlier, one optimized for planning and the other optimized for large-scale analysis? In-memory columnar databases like SAP HANA offer the opportunity to do both in the same database and data model, which makes it easier to model and plan in the context of large-scale analytics. With data visualization operating on the same data store that's being used for analysis and planning, it's a potentially potent combination, blurring the lines between analysis and modeling.
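Here’s a toy sketch of the idea (nothing to do with HANA’s actual engine): one data model serving both sides, where the same table answers an analytic rollup and absorbs a what-if planning adjustment.

```python
# Toy illustration of a single data model serving analytics and planning.
import pandas as pd

facts = pd.DataFrame({
    "region":   ["EMEA", "EMEA", "APAC", "APAC"],
    "scenario": ["actual", "plan", "actual", "plan"],
    "revenue":  [120.0, 130.0, 90.0, 100.0],
})

# Analytics: roll up actuals vs plan by region.
print(facts.pivot_table(index="region", columns="scenario",
                        values="revenue", aggfunc="sum"))

# Planning: adjust a driver (raise the APAC plan 5%) and re-read the same model.
facts.loc[(facts.region == "APAC") & (facts.scenario == "plan"), "revenue"] *= 1.05
print(facts.pivot_table(index="region", columns="scenario",
                        values="revenue", aggfunc="sum"))
```

The point is that the adjustment and the analysis never leave the same model – no export, and no reconciliation between two glued-together stacks.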

So to do this right, it really helps to have a unified system: one database engine and model, the same engine serving both the analytics and the planning, one set of common definitions, one unified user experience, one business dictionary across both. It’s no longer about gluing these systems together, as happened over a decade ago – they have to be rethought in the context of where planning and analytics are headed, and designed together.

For once, this isn’t just vendor hype. As the nature of planning changes, a new opportunity opens up to rethink the systems that enable it.

Now time for that PB&J.