Splunk on Volume, Variety, Velocity

Australia’s corporate sector has taken to the burgeoning Big Data market in the same fashion it has taken to other waves of new technology: as an early, innovative adopter, according to Daniel Miller, the local country manager of Nasdaq-listed Big Data veteran Splunk.

Big Data issues are not just about data volume. The variety of data types and the speed at which data is generated add further complexity. Much of it is unstructured, and it is ballooning.

Consider that every 60 seconds Google serves more than 694,445 search queries, 600 videos are uploaded to YouTube, adding more than 25 hours of content, and 168,000,000 emails are sent.

Or that every 60 seconds 695,000 status updates, 79,364 wall posts and 510,040 comments are published on Facebook. And those numbers don’t include all the machine-generated data that results from each of those human creations.

Set up in Australia just two-and-a-half years ago with a single employee, Splunk has been one of the most active technology transfer drivers for Big Data in this country. It has quietly created a significant, fast-growing subsidiary with nine employees and a trajectory that will double head count again in the next 12 months.

“Australia has always been an early innovative adopter of new technology, and that has been no different with Big Data,” Miller said.

Splunk is bringing one of its senior commercial and technical experts to Australia to speak at the Big Data Conference in Sydney on October 31 to November 1, an event presented by the CeBIT Global Conferences group.

But where most of the Big Data vendors and services providers tend to be consultants on the Google-inspired Hadoop and MapReduce systems, Splunk brings different products and different value propositions to its customers. It has its own distributed indexing architecture and its own search language.
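The article does not reproduce any of that search language, but as a rough illustration of what querying indexed machine data can look like, the sketch below submits a search to a Splunk instance over its REST interface. The host, port, credentials, index name and field names are placeholder assumptions, not details drawn from the article.

```python
import requests

# Sketch only: run a search against a hypothetical local Splunk instance.
# The management/REST port defaults to 8089 over HTTPS; the credentials,
# index name and field names below are placeholders.
SEARCH_URL = "https://localhost:8089/services/search/jobs/export"
AUTH = ("admin", "changeme")  # placeholder credentials

# An example query in Splunk's search language: count HTTP 500 errors
# by host over the last hour, assuming web logs are indexed in "web".
spl = "search index=web status=500 earliest=-1h | stats count by host"

resp = requests.post(
    SEARCH_URL,
    auth=AUTH,
    data={"search": spl, "output_mode": "json"},
    verify=False,   # default installs ship a self-signed certificate
    stream=True,
)
resp.raise_for_status()

# The export endpoint streams results back as newline-delimited JSON.
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))
```

The pipe syntax, which chains a raw search to statistical commands such as stats, is the characteristic feature of the language.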

Like the rest of the Big Data sector, Splunk focuses on unstructured data – but its real focus is on the massive volumes of machine-generated data. According to research group IDC, about 90 per cent of the data in today’s organisations is machine generated – by websites, applications, servers, networks, mobile devices and the like.

The whole value proposition of Big Data rests on the notion that these massive pools of unstructured data hold tremendous value, Mr Miller says. The trick to unlocking that value lies in being able to handle such massive data volumes, so many different types of data, and the sheer speed at which new data is being generated.

Splunk’s Enterprise product collects, monitors, indexes and analyses the machine data generated by IT applications and infrastructure, whether physical, virtual or in the cloud. This machine data is massive in scale and contains a definitive record of all transactions, systems, applications, user activities, security threats and fraudulent activity. This data is largely untapped; Splunk helps organisations unlock its value.
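The collection and indexing side is not described in detail in the article either. As a minimal sketch, a single raw log line could be submitted for indexing through the same REST interface; the endpoint path, host, credentials, index and sourcetype below are illustrative assumptions rather than anything from the source.

```python
import requests

# Sketch only: submit one raw machine-data event to a hypothetical local
# Splunk instance for indexing. Host, credentials, index and sourcetype
# are illustrative placeholders.
INGEST_URL = "https://localhost:8089/services/receivers/simple"
AUTH = ("admin", "changeme")  # placeholder credentials

event = '2012-10-31 09:15:02 host=web01 status=500 msg="upstream timeout"'

resp = requests.post(
    INGEST_URL,
    auth=AUTH,
    params={"index": "web", "sourcetype": "web_errors"},
    data=event,
    verify=False,  # default installs ship a self-signed certificate
)
resp.raise_for_status()
print("Event accepted for indexing:", resp.status_code)
```

Once indexed, events like this one are what searches such as the example above run against.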

Mr Miller says the Splunk platform differs from competitors in that it contains in-built dashboards and tools, as well as specialist visualisation features that simplify the trends and results drawn from vast pools of raw data.

As the largest information technology user in the country, the Federal Government is expected to become a big user of Big Data tools, and will be a focus for Splunk.

Mr Miller says local customers are often reluctant to talk about precisely what they are using Splunk tools for – sometimes because they are still in test “suck it and see” mode, and sometimes because they don’t want to reveal a competitive advantage.

But the company has made strong inroads in the university sector – especially as Big Data systems are well suited to development and research environments – and Mr Miller expects strong growth among public sector customers in the coming year.

See the latest in Big Data Innovations and trends at the Big Data Conference in Sydney on 31 October – 1 November 2012.