Human-generated big data

Human-generated content comprises all the files and emails we create every day: the presentations, word-processing documents, spreadsheets, audio files and other documents our employers ask us to produce hour by hour. These are the files that take up the vast majority of digital storage space in most organisations; they are kept for long periods and carry huge amounts of metadata.

Human-generated content is huge, and its metadata is even bigger. Metadata is the information about a file: who created it, what type of file it is, what folder it is stored in, who has been reading it and who has access to it. Together, the content and the metadata make up human-generated big data.
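
To make the idea concrete, here is a minimal sketch of what a single file's metadata record might look like. It is written in Python, and every field name is an illustrative assumption rather than any product's actual schema:

    from dataclasses import dataclass, field
    from datetime import datetime

    # Illustrative sketch only: these fields are assumptions, not a vendor schema.
    @dataclass
    class FileMetadata:
        path: str                  # folder and file name
        file_type: str             # e.g. "spreadsheet" or "presentation"
        created_by: str            # who created the file
        created_at: datetime       # when it was created
        readers: list = field(default_factory=list)   # who has been reading it
        permitted: set = field(default_factory=set)   # who has access to it

    doc = FileMetadata(
        path="/finance/q3/budget.xlsx",
        file_type="spreadsheet",
        created_by="alice",
        created_at=datetime(2012, 6, 1),
        readers=["alice", "bob"],
        permitted={"alice", "bob", "carol"},
    )

Even this toy record hints at why metadata outgrows content: every open, share and permission change adds new entries, while the file itself stays the same size.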

The problem is that most of us, meaning organisations and governments, are not yet equipped with the tools to exploit human-generated big data. A recent survey of more than 1,000 Internet experts and other Internet users, published by the Pew Research Center and the Imagining the Internet Center at Elon University, concluded that the world may not be ready to properly handle and understand Big Data.

These experts concluded that the huge quantities of data, which they term “digital exhaust,” that will be created by the year 2020 could well enhance productivity, improve organisational transparency and expand the frontier of the “knowable future.” However, they are concerned about whose hands this information ends up in and whether governments and corporations will use it wisely.

The survey found that “…human and machine analysis of big data could improve social, political and economic intelligence by 2020. The rise of what is known as Big Data will facilitate things like real-time forecasting of events; the development of ‘inferential software’ that assesses data patterns to project outcomes; and the creation of algorithms for advanced correlations that enable new understanding of the world.”

Some 39% of the Internet experts surveyed agreed with the counter-argument to Big Data’s benefits, which posited that “Human and machine analysis of Big Data will cause more problems than it solves by 2020. The existence of huge data sets for analysis will engender false confidence in our predictive powers and will lead many to make significant and hurtful mistakes. Moreover, analysis of Big Data will be misused by powerful people and institutions with selfish agendas who manipulate findings to make the case for what they want.”

As one of the study’s participants, entrepreneur Bryan Trogdon, put it: “Big Data is the new oil,” observing that “…the companies, governments, and organisations that are able to mine this resource will have an enormous advantage over those that don’t. With speed, agility, and innovation determining the winners and losers, Big Data allows us to move from a mindset of ‘measure twice, cut once’ to one of ‘place small bets fast.’”

Jeff Jarvis, professor and blogger, said: “Media and regulators are demonizing Big Data and its supposed threat to privacy. Such moral panics have occurred often thanks to changes in technology. But the moral of the story remains: there is value to be found in this data, value in our newfound ability to share. Google’s founders have urged government regulators not to require them to quickly delete searches because, in their patterns and anomalies, they have found the ability to track the outbreak of the flu before health officials could, and they believe that by similarly tracking a pandemic, millions of lives could be saved. Demonizing data, big or small, is demonizing knowledge, and that is never wise.”

Sean Mead, director of analytics at Mead, Mead & Clark, Interbrand, said: “Large, publicly available data sets, easier tools, wider distribution of analytics skills, and early-stage artificial intelligence software will lead to a burst of economic activity and increased productivity comparable to that of the Internet and PC revolutions of the mid to late 1990s. Social movements will arise to free up access to large data repositories, to restrict the development and use of AIs, and to ‘liberate’ AIs.”

These arguments begin to get to the heart of the matter: our data sets have grown beyond our ability to analyse and process them without sophisticated automation. We simply have to rely on technology to cope with this enormous wave of content and metadata.

Analysing human-generated big data has enormous potential. More than potential: harnessing the power of metadata has become essential to managing and protecting human-generated content. File shares, emails and intranets have made it so easy for end users to save and share files that organisations now hold more human-generated content than they can sustainably manage and protect using small-data thinking. Many organisations face real problems because questions that could be answered 15 years ago, on smaller, more static data sets, can no longer be answered: where does critical data reside, who accesses it, and who should have access to it? As a consequence, IDC estimates that only half the data that should be protected actually is.
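
As a sketch of the kind of automated question-answering this implies, the Python fragment below takes a hypothetical access log and permissions table and flags users who hold access they never use, one simple candidate for an access review. All names and data structures here are invented for illustration:

    from collections import defaultdict

    # Hypothetical inputs: (user, file) access events and a file -> permitted-users map.
    access_events = [
        ("alice", "/finance/q3/budget.xlsx"),
        ("bob", "/finance/q3/budget.xlsx"),
    ]
    permissions = {"/finance/q3/budget.xlsx": {"alice", "bob", "carol"}}

    # Group the users actually observed reading each file.
    readers = defaultdict(set)
    for user, path in access_events:
        readers[path].add(user)

    # Anyone permitted but never seen reading is a candidate for access review.
    for path, permitted in permissions.items():
        stale = permitted - readers[path]
        if stale:
            print(f"{path}: review access for {sorted(stale)}")

At organisational scale the same comparison runs over billions of events, which is exactly why the manual, small-data approach no longer copes.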

The problem is compounded by cloud-based file sharing: these services create yet another growing store of human-generated content requiring management and protection, one that lies outside corporate infrastructure and has different controls and management processes.

David Weinberger of Harvard University’s Berkman Center said: “We are just beginning to understand the range of problems Big Data can solve, even though it means acknowledging that we’re less unpredictable, free, madcap creatures than we’d like to think.” If harnessing the power of human-generated big data can make data protection and management less unpredictable, free and madcap, organisations will be grateful.
