Data vs Insight: The Albatross Around the Marketer’s Neck

We have so much data at our fingertips. Every touch, interaction, click, email, webpage view. It all results in data. Even when we walk from one room to the next our phones are counting the steps, movement, changes in latitude and longitude. We are measured to within an inch of our lives.

Some of this data is captured and reported back to cloud-based servers scattered across the globe. Some of it isn’t. But do we know? Do we care?

I was speaking with John Dobbin yesterday about the Data Paradox. We have more data than ever before, but less understanding of what to use it for. We spend our time analysing dashboards and combing through spreadsheets in search of that elusive insight. Sometimes, as a marketer, I feel like Coleridge’s Ancient Mariner:

Water, water, everywhere,
And all the boards did shrink;
Water, water, everywhere,
Nor any drop to drink.

Data visualisation goes a long way towards solving this challenge. Done well, it can bring your data to life – tell a story – and foreground important details. But with almost every visualisation I see, I am always asking myself, “why”. Why is this important? Why did a change occur? Why didn’t a change occur?

Take a look at my recent TwitterCounter graph below. It shows follower/following counts over the last month. You can see there are a couple of spikes in terms of follower numbers. But you can also see that “following” numbers remain on an even trajectory. Just the simple act of looking at this graph reminded me of the actions that I had and had not taken over the last month. It made me check back to see what I was doing on March 7.

And on March 11, clearly I did something to arrest that growth. But the following week I was growing again. Not as steeply, but strongly.

[Image: TwitterCounter graph of follower and following counts]

Correlation vs Causation

Again the question of “why” raises its head. What I am interested in is not the correlation but the causation. At the launch of Martin Lindstrom’s new book, Small Data, he suggested that small data drives causation while big data reveals correlation. So with this in mind, I looked to the small things.

  • Ahead of the first spike in follower growth I started using Meet Edgar to tweet more consistently. Prior to that, my tweeting was random – sometimes scheduled, sometimes ad hoc. It was not a function of what I was saying, but the fact that I was saying it.
  • The second spike built on the earlier week but benefited from my appearance on DisrupTV with GE’s Ganesh Bell and Constellation Research’s Guy Courtin.

While the big data revealed the trend and the results, it was the small data – the personal data, the insight – that actually revealed the causation. As Martin Lindstrom suggested, and as I have written previously, small data – the known unknowns of the marketing world – tells the story we are waiting to hear. The question is whether we are listening for a story or searching for data.

Big Data and the Trust Paradox

We have all become blasé about the information that we share on the internet. We openly tweet, share updates and post photos of where we are, what we are doing and who we are with. We carry our mobile phones with us everywhere – and have become so reliant upon them that we have had to name a condition for the state of anxiety we feel when we leave our phones at home: “nomophobia” – literally, the fear of having “no mobile”.

And just as our internet connection is “always on”, so too is our phone. And being always on, it’s always collecting, sharing and posting data about us. Even when it’s sitting “idle” in our pockets it is triangulating our position – fixing our latitude and longitude from GPS satellites, connecting to wifi hotspots and cellular towers. Many of the apps that we use also collect and share our location – some obviously, like Google Maps and Facebook; others less so. But it’s when we start using the phone that the data really explodes.

The following infographic is now quite old, having originally been published in 2010. It shows the metadata – the hidden data that is relayed along with every update you make on Twitter. It’s not just the 140 characters of your message, but hundreds of additional characters that accompany it, including your:

  • User name
  • Biography
  • Location
  • Timezone
  • Follower / following statistics.

And more. So much more.

[Infographic: The Anatomy of a Tweet]

Trading privacy for convenience

The accepted wisdom is that users of these services are knowingly trading privacy for convenience. The reality is vastly different. After all, when using the internet, we are not operating with full knowledge. In fact, our understanding of what we are doing, how much information we are revealing and where our data goes is extremely limited. And even when we choose to share location information with an app or accept notifications, chances are that we will forget that consent has been given. Or the context in which that consent was given will be lost in the daily grind of our busy, connected lives.

This plays well for those platforms that collect, harvest and sell the data of their users. In fact, it’s the business model that many startups rely upon – data collection, harvesting, sale and exploitation is the name of the game. But there is change in the air, and we can expect these business models to come under increasing scrutiny and pressure. A 2014 EMC poll revealed that only 27% of those surveyed were willing to trade their private information for a more convenient online experience. And over half (51%) said “no” outright. Moreover:

The majority also believed “businesses using, trading or selling my personal data for financial gain without my knowledge or benefit” were the greatest threat to their online privacy.

These beliefs and expectations were further reinforced in the Pew Research Center’s Future of Privacy report, where some 55% of respondents said “no” – they do not believe that an accepted privacy-rights regime and infrastructure will be created in the coming decade.

Yet despite an inherent and ongoing suspicion of corporations and governments, the Edelman Trust Barometer for 2016 reveals that the general sense of trust is improving. Edelman’s research describes a well-educated and well-resourced segment of the population (approximately 15%) as the “informed public” – and measures trust in the wider population as well as in this narrower segment. To qualify as part of the “informed public”, people must be:

  • Aged 25-64
  • College educated
  • In the top 25% of household income per age group in each country
  • Significant consumers of media who report high engagement with business news.

This also means that the “informed public” would be considered a “tech savvy” audience.

While trust has grown overall, it grew faster between 2015 and 2016 in the “informed public” segment. And this is what makes the report so interesting. Despite wide and growing concern around big data, metadata and data analytics, those who are MOST LIKELY to know and understand the uses to which their data will be put are reporting an improvement in their sense of trust.

Trust paradox: when an “informed public” is more likely to trust the use of its data, despite knowing the risks.

And it is this “Trust Paradox” which offers business both hope and a warning. For while trust has been improving, business and government are only as trusted as the last security breach or unexpected outage. The IBM/Forbes Fallout report estimates that “lost revenues, downtime and the cost of restoring systems can accrue at the rate of $50,000 per minute for a minor disruption”. A prolonged problem would take an even greater toll on brand reputation and business goodwill.

The risk of a breach or outage, however, is not shrinking but growing, thanks to the proliferation of “shadow technology”, expanding supply chains and growing online activism. And as digital transformation continues to take on an ever greater role in customer experience, the potential for consumer impact and reputational damage also grows.

John Hagel suggests that as brands work towards “trusted advisor” status, they will have a “growing ability to shape customer purchasing behaviour”. But brands will only have this luxury while the Trust Paradox works in their favour. At present, the Edelman Trust Barometer suggests the balance of power remains with our peers. We trust them more than anyone else. And that means securing or “scaling trust” (to use John Hagel’s terms) remains our real challenge in the years ahead.

Periscope Captures from the ADMA Global Forum

Live streaming with apps like Periscope and Meerkat has revolutionised conferences of all kinds. Whether you are hosting a small lunchtime gathering of friends, colleagues or experts, or attending a massive conference, these easy-to-use apps allow you to share what you are seeing with the wider world. Or at least with those who follow you on social media.

With this in mind, I thought I’d put both Meerkat and Periscope through their paces at the ADMA Global Forum, which brings the world’s leading data-driven marketers together to share insights, best and emerging practices, case studies and strategies. This year, there is strong representation from technology firms with good stories to tell – Oracle, IBM and Marketo are represented, and Facebook is too. I also dropped by the stands of IVE, Minfo and others. This year is an improvement on last, but there’s more work to be done on their exhibition stands and their ability to talk to marketers on their own terms. If you have a tech brand needing to talk “marketing”, then maybe we should talk too.

I live streamed and recorded a number of sessions and embedded them below. Tomorrow I will go deeper. Do some interviews. Chat with the teams on the exhibition stands and some of the audience. Let me know what you’re interested in and I will see who I can talk to.

@AndyVen from TheOutNet

Marketing Automation with @Missguided

Case study from @Missguided

Case study from Bosch

Case study from Regions Bank

Content Strategy from @TheOutNet

Case study from Getty Images

Personalisation strategy from Sitecore ANZ

Five Insights into the Psychology of Twitter

Statistics and sampling are amazing things. Even if, like me, you have a healthy scepticism about the way data is analysed and interpreted, it is difficult – if not foolhardy – to downplay the inevitability of data. Just look at the various disputes around the veracity of climate change – where statistically irrelevant interpretations have derailed important decisions, changes and commitments. Eventually, even the hardiest data curmudgeon will need to yield to the truth of the climate science data – perhaps only as their seaside apartment is swept into the arms of the sea. For though there may be outliers and anomalies in any data set, sampling – where carried out correctly – can yield tremendously accurate insight. As Margaret Rouse explains on the TechTarget website:

Sampling allows data scientists, predictive modelers and other data analysts to work with a small, manageable amount of data in order to build and run analytical models more quickly, while still producing accurate findings. Sampling can be particularly useful with data sets that are too large to efficiently analyze in full — for example, in big data analytics applications.
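To make the idea concrete, here’s a minimal sketch of the technique at work – classic reservoir sampling in Python, over an assumed “tweets.csv” file too large to load into memory:

```python
import csv
import random

def reservoir_sample(rows, k):
    """Keep a uniform random sample of k rows from a stream of unknown length."""
    sample = []
    for i, row in enumerate(rows):
        if i < k:
            sample.append(row)
        else:
            # Replace an existing pick with probability k/(i+1),
            # which keeps every row equally likely to survive.
            j = random.randint(0, i)
            if j < k:
                sample[j] = row
    return sample

# Stream the file rather than loading it all into memory.
with open("tweets.csv", newline="") as f:
    sample = reservoir_sample(csv.reader(f), k=10_000)

print(f"Sampled {len(sample)} rows for analysis")
```

Every row in the stream ends up with the same chance of landing in the sample – which is precisely what makes findings drawn from the sample defensible.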

And it is sampling that makes Twitter one of the more fascinating social networks and big data stores of our time. While Facebook grows its membership into the billions, its underlying data store, its connection and interaction architecture and its focus on first-tier networks limit its capacity to operate efficiently as a news source and distribution network. Twitter, on the other hand, with its 200+ million members, provides a different and more expansive member engagement model.

During our recent forum presentations on the voice of the customer, Twitter’s Fred Funke described Twitter as “the pulse of the planet”. Using tools as simple as Twitter search or Trending Topics, Twitter users can quickly identify topics that are important to them – or to the broader local, regional and global communities. And, of course, with the new IBM–Twitter partnership, there is a raft of tools that allow businesses to go much deeper into these trends and topics.

In doing so, however, we have to ask: what are we looking for? What information will create a new insight? Which data points will reveal a behaviour? And how can this be framed in a way that is useful?

Five Buyer Insights that Drive Engagement

Just because interactions are taking place online doesn’t mean that they occur in isolation. In fact, our online and offline personalities are intricately linked. And as the majority of our digital interactions take place via text, linguistic analysis will reveal not only the meaning of our words but also our intention. Some things to look out for and understand include:

  1. Buying is an impulse: As much as economists would like to believe we act logically, we know that buyers are emotional creatures. We buy on whim. On appeal. On impulse. And there is no greater impulse these days than to share an experience (good or bad) via Twitter. Look particularly at the stream for comments tagged with #fail (see the sketch after this list). It is full of opportunity for the responsive marketer keen to pick up a churning customer having a bad customer experience.
  2. The customer journey is visible: While we are researching our next purchase, we digital consumers leave a trail of digital breadcrumbs that can be spotted using analytics software. For example, we may tweet out links to videos that we are viewing on YouTube, share blog posts related to our pre-purchase research and even ask directly whether a particular product lives up to the hype. Just take a look at the #lazyweb stream around the topic of Windows 10.
  3. Understand the pain to optimise the opportunity: When engaging via social media, it is important to understand the challenges or “pain points” that your customers (or potential customers) are facing. Rather than spruiking the benefits of your own products, focusing on an empathetic understanding of your customer’s needs more quickly builds trust and is grounded in a sense of reality. The opportunity with social media is to guide the journey, not short cut it.
  4. Case studies build vital social proof: No one wants to be the first to try your new product. Showing that the path to customer satisfaction is well worn is vital. Use case studies to pave the way.
  5. We buy in herds: Mark Earls was right. Not only do we want social proof, we prefer that proof to reflect on our own sense of belonging to a group or movement. Remember that we go where the other cows go, and structure your social media interactions accordingly.
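As promised, here’s a rough sketch of the #fail monitoring described in point 1 – Python against Twitter’s v1.1 search API, with an assumed app-only bearer token and a hypothetical brand keyword:

```python
import requests

BEARER_TOKEN = "..."  # assumed: an app-only auth token for the Twitter REST API
BRAND = "acmeair"     # hypothetical brand keyword to monitor

# Search the public stream for recent complaints tagged #fail that mention the brand.
resp = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    params={"q": f"{BRAND} #fail", "result_type": "recent", "count": 100},
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
)
resp.raise_for_status()

for tweet in resp.json()["statuses"]:
    # Each hit is a churn-risk conversation a responsive marketer can join.
    print(tweet["user"]["screen_name"], "-", tweet["text"])
```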

The folks over at eLearners.com have put together this infographic on the psychology of Twitter. They suggest that we tweet for love, affection and belonging. That may be true, but sometimes we also just want to vent. And every vent is a market opportunity.

[Infographic: The Psychology of Twitter]

How IBM and AusOpen Tennis are rethinking the “sportacular”

The spectacle is not a collection of images, but a social relation among people, mediated by images.
— Guy Debord, Society of the Spectacle

One of the most transformative trends of the last decade has been the shift from inside-out to outside-in thinking. It can be applied to almost any industry or area of expertise. Think, for example, about technology innovation. Until recently, new ideas and inventions were the province of internal business and technology teams. Research and development funds and resources would be sunk into various teams and programs – some official and others operating in the shadows. Eventually these ideas would emerge as new products. As innovation made flesh (or wires, or software). But over the last decade this has shifted. With more open platforms like Salesforce1, and the growth of developer communities around software platforms from SAP to Marketo, Jive to Atlassian, innovation has breached the firewall and taken on a life of its own.

The combination of open-ish platforms and abundant data is creating new opportunities for businesses and their customers alike. But the benefits are not limited to an in-your-face direct-to-your-phone coupon or BOGOF offer. By combining technology and matching it to human behaviour, we are beginning to explore a whole new world of experiences. And the testing ground for this innovation is not a lab in the middle of the Nevada Desert, but right here, at one of Australia’s largest sporting events – the Australian Open Tennis.

IBM and the Australian Open Tennis have been partners for decades. Many years ago, my extended team worked on IBM’s large-scale sports events like the Olympics, the NBA, the Super Bowl and the Australian Open – so I had the opportunity to see, from the inside, how the technology and services were brought together. There has been a lot of water under the bridge since then, so when I was invited to go “behind the scenes” and learn how things had changed, I jumped at the chance.

As has always been the case, the technology and services that IBM provides are deeply embedded in the running of the event. But it’s not just the outward-facing systems and technology. These days, the technology and business process integration taps into the way that the grand slam tournament is run and reported – it’s “hard wired” (or should I say “wirelessly wired”) into the fabric of the business, event management and fan experience. This partnership stretches from the electronic impulses of the web to the real-time provisioning of servers, the deployment of onsite security personnel, the management of player timetables and grand slam rules, and the amplification of fan experiences via social media.

Big Data on the Centre Court

Ian Wong from IBM’s interactive experiences team put the technology through its paces, revealing court data for each of the games. There were enough high-level statistics to excite the armchair observer – from fastest serve to win ratios and more. All of this data was being captured by a combination of on-court technology and human observation. The umpire’s scoring PDA fed directly into the system, managing scoring, counting strokes and helping to keep the games moving to time. High up in the stands were other observers, cross-checking and monitoring the on-court action.


The data capture sounds impressive – to a point – but it’s the big data activity underneath where things become fascinating. Each game, each point, each stroke is captured and stored. This information is then processed and analysed – building upon the same data going back over eight years. This massive data store allows the IBM team and the Australian Open to profile players and particular matches, and to use a matching algorithm to unlock the “keys to the match”.

For example, say Martina Hingis was playing Serena Williams. The system could pull all previous match data and reveal the aspects of match play that each had to achieve in order to win. The “keys to the match” might be “successful first serve 76% of the time” or “serving to the opponent’s backhand 65% of the time on second serve”, and so on. Interestingly, this information and analytics power is made available to coaches and players through interactive panels – though not to the general public.
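IBM hasn’t published its matching algorithm, but the shape of the problem – finding the stat thresholds that best separate a player’s wins from losses – can be sketched roughly like this (pandas, with hypothetical column names):

```python
import pandas as pd

# Hypothetical history: one row per past match for a given player,
# with performance stats and a 0/1 "won" flag.
matches = pd.read_csv("player_matches.csv")

def key_to_match(df, metric, won="won"):
    """Find the threshold of a stat that best separates wins from losses."""
    best_threshold, best_gap = None, 0.0
    for t in df[metric].quantile([i / 20 for i in range(1, 20)]).unique():
        above = df.loc[df[metric] >= t, won]   # matches where the key was met
        below = df.loc[df[metric] < t, won]    # matches where it was missed
        if len(above) == 0 or len(below) == 0:
            continue
        gap = above.mean() - below.mean()      # win-rate lift when the key is met
        if gap > best_gap:
            best_threshold, best_gap = t, gap
    return best_threshold, best_gap

for metric in ["first_serve_pct", "second_serve_to_backhand_pct"]:
    threshold, gap = key_to_match(matches, metric)
    print(f"Key to the match: {metric} >= {threshold:.0f}% (win-rate lift {gap:.0%})")
```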

From off-court big data to real world big data

While the spectacle of on-court action is the drawcard, in true “Society of the Spectacle” form, an equally important aspect of the Australian Open Tennis is the fan experience. Tennis Australia CIO, Samir Mahir, has put together a massive sense-and-respond network that not only connects fans, it responds to their needs – matching expectation and demand with services and resources just in time. As Samir explained, “it’s about getting more people to come to the Tennis”.

Ingeniously, providing free wifi for all attendees means that fans are always connected – with a fat pipe of data ready and available at all times. AusOpen have established “Selfie Zones” around the stadium where people are encouraged to take photos and share them via Instagram, Twitter and Facebook. And when tagged, those photos can be displayed on the big screens of the centre court for all to see.

But this is just the start of the big data story. As people move around the stadium, routers count the number of connections to wifi points of presence. This data is then fed in real time to the logistics engines, which can determine whether there are sufficient security personnel, adequate ticketing booths open and available, or enough food and drink vendors in the right locations. If necessary, vendors or security personnel are notified and redeployed to ensure levels of service and safety are maintained across the massive complex.
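The event’s own logic isn’t public, but the sense-and-respond loop described here amounts to something like this sketch (Python, with assumed zone names and an assumed service level):

```python
from collections import Counter

# Hypothetical feed: (zone, connected_devices) readings from the wifi routers.
readings = [("garden_square", 4200), ("rod_laver_east", 9800), ("food_court", 6100)]

STAFF_PER_1000 = 2   # assumed service level: 2 staff per 1,000 connected fans
deployed = Counter(garden_square=6, rod_laver_east=14, food_court=8)

for zone, devices in readings:
    required = -(-devices * STAFF_PER_1000 // 1000)   # ceiling division
    if deployed[zone] < required:
        # In the real system this would notify the logistics engine;
        # here we just flag the shortfall.
        print(f"{zone}: redeploy {required - deployed[zone]} staff (crowd ~{devices})")
```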

Similarly, social media is monitored for volume and velocity. This information is then crunched by IBM’s Watson natural language processor. Realising that an increase in social media activity creates increased demand on the AusOpen website, IBM is able to use social media velocity to predict and trigger on-demand web server provisioning.
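Again, the actual trigger logic is IBM’s, but the principle – using social chatter as a leading indicator of web load – can be sketched in a few lines (assumed numbers, hypothetical scale-out hook):

```python
# Mentions of #AusOpen per minute over the last five minutes (assumed figures).
tweets_per_min = [180, 210, 240, 520, 910]

def scale_out(extra_servers):
    print(f"provisioning {extra_servers} extra web servers")  # placeholder hook

baseline = sum(tweets_per_min[:-1]) / (len(tweets_per_min) - 1)
velocity = tweets_per_min[-1] / baseline

# A spike in chatter tends to precede a spike in site traffic,
# so provision ahead of the load rather than reacting to it.
if velocity > 2.0:
    scale_out(extra_servers=int(velocity))
```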

From sport to sportacular

By placing the fan – the consumer of sport – at the centre of the tennis experience, IBM and the Australian Open are transforming what was once a spectator event into a “sportacular”. They are combining and showcasing the gladiatorial contest on court, the preparations on the practice courts, and a fan experience that connects the location and each fan’s own experience in an event that is part of a global grand slam series.

In an instant these messages are captured, multiplied, beamed around the world and brought back to the individual, one selfie at a time.

The basically tautological character of the spectacle flows from the simple fact that its means are simultaneously its ends. It is the sun which never sets over the empire of modern passivity. It covers the entire surface of the world and bathes endlessly in its own glory.
— Guy Debord, Society of the Spectacle

Note: travel and accommodation were courtesy of IBM.

 

Data is Eating Marketing: Digital, Social and Mobile in 2015

Data. It’s out there. And there is plenty of it. We create data with every status update, photo shared or website viewed. Each search we make is monitored, sorted, indexed and analysed. Every purchase we make is correlated, cross-matched and fed into supply chain systems. And every phone call we make is logged, kept and passed on for “security purposes”.

There are so many kinds of data that it is hard to keep up with it all. There is the data that we know about – the digital items we intentionally create. There are digital items that are published – like books, websites and so on. And there is email, which creates its own little fiefdom of data.

There is also the data about data – metadata – which describes the data that we create. Take, for example, a simple tweet. It is restricted to 140 characters. That is the “data” part. But the metadata attached to EACH and EVERY tweet includes information like:

  • Your location at the time of tweeting (ie latitude and longitude)
  • The device you used to send the tweet (eg phone, PC etc)
  • The time of your tweet
  • The unique ID of the tweet.

But wait, there’s more. From the Twitter API you can also find out a whole lot more (see the sketch after these lists), including:

  • Link details contained within the tweet
  • Hashtags used
  • Mini-profiles of anyone that you mention in your tweet
  • Direct link information to any photos shared in your tweet

There will also be information related to:

  • You
  • Your bio / profile
  • Your avatar, banner and Twitter home page
  • Your location
  • Your last tweet.
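If you want to see this for yourself, the payload behind a single tweet is easy to inspect. A minimal sketch in Python – the field names come from Twitter’s v1.1 REST API, and “tweet.json” is assumed to be one saved payload:

```python
import json

# A tweet returned by the Twitter REST API (v1.1) is a large JSON object.
# Load one saved payload and pull out a few of the metadata fields listed above.
with open("tweet.json") as f:
    tweet = json.load(f)

print("id:        ", tweet["id_str"])
print("created_at:", tweet["created_at"])
print("source:    ", tweet["source"])           # the device/app used to tweet
print("coords:    ", tweet.get("coordinates"))  # geo coordinates, if shared
print("user:      ", tweet["user"]["screen_name"])
print("bio:       ", tweet["user"]["description"])
print("location:  ", tweet["user"]["location"])
print("followers: ", tweet["user"]["followers_count"])
print("hashtags:  ", [h["text"] for h in tweet["entities"]["hashtags"]])
print("mentions:  ", [m["screen_name"] for m in tweet["entities"]["user_mentions"]])
print("media:     ", [m["media_url"] for m in tweet["entities"].get("media", [])])
```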

There is more. But the point really is not about Twitter. It is that a seemingly innocuous act generates far more data than you might assume. The same metadata rules apply to other social networks. It could be Facebook. Or LinkedIn. It applies to every website you visit, each transaction you make. Every cake you bake. Every night you stay (you see where I am going, right?).

For marketers, this data abundance is brilliant, but also a distraction. We could, quite possibly, spend all our time looking at data and not talking to customers. Would this be a bad thing? I’d like to think so.

The question we must ask ourselves is “who is eating whom?”.

In the meantime, for those who must have the latest stats, We Are Social Singapore’s massive compendium is just what you need. Binge away.

Tell the Story of Your #BigData with QuillEngage

Big data, small data, analytics. Blah blah blah.

It all sounds like a load of waffle, right? At least until we find a thread of narrative running through the information.

For many of us, the closest we are likely to get to a large amount of usable data is on our own websites. And believe it or not, even small amounts of web traffic – visits, comments on a blog and so on – can generate a substantial amount of information. If you get a chance, log in to your server and download the “log files”. You’ll soon see just how much information is generated by visitors to your website.

The thing is, raw log files are relatively useless. Sure, they might help your webmaster pinpoint a problem or recurring error, but thousands of lines of information only make sense in aggregation. Or when they are decoded. Translated.
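To see just how much decoding is involved, here’s a minimal sketch that aggregates a raw access log into something readable – assuming the common Apache/Nginx “combined” log format:

```python
import re
from collections import Counter

# One line of an Apache/Nginx "combined" log looks like:
# 203.0.113.7 - - [10/Mar/2015:13:55:36 +1100] "GET /blog/post HTTP/1.1" 200 2326 "ref" "agent"
LINE = re.compile(r'(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3})')

pages, errors = Counter(), Counter()
with open("access.log") as f:
    for line in f:
        m = LINE.match(line)
        if not m:
            continue
        ip, method, path, status = m.groups()
        pages[path] += 1
        if status.startswith(("4", "5")):
            errors[path] += 1

print("Top pages:", pages.most_common(5))
print("Top error sources:", errors.most_common(5))
```

Even this tiny aggregation beats scrolling through the raw file – and it is still a long way from insight.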

And that’s where Quill Engage comes in. You simply sign up, provide access to your Google Analytics account, and each week Quill Engage will email you a report explaining what’s going on with your website.

Now, I have been a fan of web analytics since before there were web analytics. I have created my own reports, built simple tracking systems to collate conversion data and so on. But in a world where there is ever-increasing pressure on our decision-making capabilities, handing off the data processing to an artificial intelligence engine may be the smartest thing you can do. Especially when it produces not just a report, but insight.

[Image: a Quill Engage weekly report]

My latest report highlighted a few important things to think about:

  • My mobile traffic was up 3% over last month and now accounts for 19% of total traffic
  • The most visits to Servantofchaos.com come from Sydney and NSW (which is a big change from previous years)
  • Facebook replaced Twitter as my top social network referrer, up a massive 250% on the previous month

Why is this important?

Well, these days I hardly have time to write, let alone check the performance of the site. And even if I do check, I am unlikely to connect all the dots that this one email report does in under 10 minutes. In fact, the report from Quill Engage is so quick to read and easy to consume that you’ll be using those 10 minutes to think about what you might do differently next month.

And that’s really the point.

From what I can see, Quill Engage is a no-brainer for any business owner or marketer. Sure, you’re not going to get the detail that comes straight from Google Analytics, but the report should give you some quick pointers on what to interrogate and act upon. And it’s free, at least for the time being. Get started here.

When Big and Data got together, it was love at first Like

Breathless. Heart beating. We all know the feeling. It’s all heart, feeling, emotion. We’re waiting for the brain to kick in – but there is no relief. It’s really a sign of madness.

Love is merely a madness: and, I tell you, deserves as
well a dark house and a whip, as madmen do: and the
reason why they are not so punished and cured, is, that
the lunacy is so ordinary, that the whippers are in love too.
— Shakespeare, As You Like It, 3.2

But these days, meeting and falling in love is not just a physical thing. It’s virtual … and played out on social networks.

[Chart: Facebook posts per day, before and after a relationship begins]

The Facebook data science team has been digging through the mountains of interactions that take place between people before, during and after they fall in love. They looked in detail at the number of posts exchanged, going back 100 days before the “couple” changed their relationship status from “single”. What they found was that social media interaction plays an important role in the formation of the relationship:

When the relationship starts (“day 0”), posts begin to decrease. We observe a peak of 1.67 posts per day 12 days before the relationship begins, and a lowest point of 1.53 posts per day 85 days into the relationship. Presumably, couples decide to spend more time together, courtship is off, and online interactions give way to more interactions in the physical world.
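The measurement itself is simple in principle. Here’s a rough sketch of the kind of aggregation Facebook describes – pandas, over an assumed table of timeline posts (the couple_id, post_date and day0 columns are hypothetical):

```python
import pandas as pd

# Assumed: one row per timeline post exchanged between a couple, with the
# date of the post and the date the relationship status changed ("day 0").
posts = pd.read_csv("couple_posts.csv", parse_dates=["post_date", "day0"])

# Express every post as an offset in days from day 0...
posts["offset"] = (posts["post_date"] - posts["day0"]).dt.days

# ...then average posts per couple per day across the window studied.
window = posts[posts["offset"].between(-100, 100)]
per_day = window.groupby("offset").size() / posts["couple_id"].nunique()

print(per_day.idxmax(), per_day.max())   # the peak should sit shortly before day 0
```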

And this is where big data gets interesting. We are now starting to see digital traces of behaviours that have real world impacts. The things that we do and say online can be correlated across thousands of data points to reveal actions that take place in our so-called “real lives”. But where does it go from here?

  • Social lifestyle mapping: Facebook (and other collectors of big data) can map and improve personas, track shifts and changes in community trends and lifestyles over time
  • Predictive targeting: With social lifestyle mapping in place, algorithms can be used to predictively target individuals and groups with relevant information. This could take the form of advertising, public health messaging/recommendations, career suggestions and so on. In fact, the possibilities are endless
  • Location awareness: As a large number of Facebook interactions take place on mobile devices, location awareness can add a greater degree of relevance to any of these predictive or real-time offers.

High-level barriers

There are some immediate barriers to usefulness that spring to mind:

  • Brands are slow to embrace technology innovation: Facebook (and indeed Google) have a great deal of work ahead to prepare brands and governments for the power and opportunity this presents. Thus far we’ve seen precious little in the way of focused education and leadership in this area, and without it, organisations simply won’t be prepared for (or interested in) any of this
  • Organisations lag in digital transformation: For these opportunities to be embraced, most organisations have to undertake digital transformation activities. Ranging from change management and education to strategy, business system overhauls and process improvement, digital transformation is the only way to unlock organisation-wide value – but few are seriously committed to such a program
  • Privacy is shaping up as a contested business battleground: Many governments, corporations and individuals fervently hang on to notions of pre-internet-era privacy. Laws and regulations have struggled to keep pace with the changes taking place in our online behaviours. Meanwhile, public and private organisations are conflicted in their use of, protection of, and interest in privacy. We’ll need to work through this to understand whether privacy really is dead.


Brand Storytelling: Teradata’s Case of the Tainted Lasagna

Brand storytelling can be hard work. Not only are there all the internal hurdles to overcome – sign-offs, legal checks and so on – there is also the challenge of subject matter. What do you do if you have a complex product or solution that you are trying to explain? Which channels do you choose – and how do you incorporate social media into the mix?

I was recently speaking with a financial services industry CEO who lamented that they have the most boring product in the world. He couldn’t see how it would resonate with a social media-savvy audience.

But social media is not broadcast – especially in B2B (business-to-business) marketing. You’re not trying to reach and engage millions of people – you are (or should be) focused on the buyer’s journey and helping to ease your customer’s decision making process. That means selecting the most appropriate channel – and delivering content that provides very specific value to your customer at their point of need. And brand storytelling can form a very powerful component of your content strategy and lead nurturing program.

Still unsure of how this might work for you and your brand?

Enterprise software vendor Teradata have been experimenting with brand storytelling for some time and have taken a novel approach that you may want to steal (I mean “learn from”). Tapping into pop culture’s interest in forensic analysis (à la CSI), they have created a series of videos that take a new approach to case studies and product/solution brochures. The “Business Scenario Investigations” (BSI) team dramatise business problems and then showcase how technology can be used to “solve” them.

http://www.youtube.com/watch?v=BaXpsNATecc

Each of the videos can be found on the BSI: Teradata Facebook page as well as the YouTube channel. They cleverly provide a PowerPoint version of each scenario via Slideshare and share the storyboarding process from problem definition to casting through to resolution. And while the case of the tainted lasagna may not be to your taste, it’s likely to be very appealing to those CIOs and CMOs wanting to understand how data can transform their businesses. And that’s tasty. Very tasty indeed.
