Last week, I attended UNC Charlotte’s Analytics Frontiers Conference. The event once revolved around concepts of “big data” and has since evolved to cover more practical applications of insights derived from modern analytics and data science.

Three major themes jumped out to me this year:

Analytics is now table stakes.

The basics have been commoditized and no longer represent a competitive advantage. Keynote speakers David Kiron and Sam Ransbotham pointed out that the opportunity for competitive advantage now comes from going further, because getting the right information to the right person at the right time (GRITRPART) is no longer sufficient. With this comes a rise in data products (and data product managers) on the market, along with demand for consultative services aimed at solving real business problems with advanced analytics.

Data is the easy part.

Not long ago, massive data sets represented a significant technical challenge. By analogy to Maslow’s hierarchy of needs, the basic physiological needs of data generation, storage, and access have largely been met. Everything we do these days throws off data. Everything is measurable. Today’s issues revolve more around data complexity, data silos, security implications, and unknown unknowns. That last one is a big issue: it is not always clear what business problem an analyst can solve with a given set of data, and the factors that go into those decisions are often largely unknown at the start of analysis. Panelist Bill Ribarsky from Bank of America spoke of applications for knowledge-synthesis models that enable analysts to guide statistical learning through interactive data visualizations. In this world, interaction design becomes critically important to overall data product design and system architecture.

Human-machine augmentation is the future.

This was the most frequently raised theme of the event across session topics. Machine learning and artificial intelligence (AI) are all the buzz these days, but the machines may only be able to take us so far. Keynote speaker Tom Davenport said that machines may win at “depth” of knowledge, but humans still win at “breadth.” He pointed out that the AI system that beat the world champion at the immensely complicated game of Go earlier this year cannot play checkers. Hybrid systems, in which advanced cognitive products deliver insights to a human for interpretation and direction on next steps, will deliver enormous productivity gains. Skeptically, I wonder whether this sentiment is self-preservational thinking by the people who would be most displaced. Still, it makes sense that if competitors all run off “the algorithm,” the opportunity to get ahead must come from the nuanced discovery, interpretation, and action that follow such analysis. Davenport also pointed out that while underwriting has been largely automated for more than a decade, there are still underwriters.

San Francisco dominates tech. Los Angeles dominates media. New York dominates finance. When you think “data science,” no city yet dominates as a capital for expertise, but Charlotte stands out in the field (along with, according to one panelist, Columbus, Ohio). With its prevalence of data-rich industries such as banking, healthcare, and energy, Charlotte has strong market demand for data science innovations. UNC Charlotte has been investing heavily in its Data Science Initiative, and enthusiasm for the program continues to be reflected in the annual growth and sell-out of conference events like this one.