Originally posted at Living Cities.
By Will Cook.
“Big data” and social media analytics get a lot of press, so it isn’t surprising that there is also plenty of talk about how these tools can help city governments. However, cases of cities getting actionable intelligence on citizen sentiment from social media remain few and far between.
There are multiple reasons for this, procurement challenges clearly being among them, but there are also issues inherent to scraping social media.
Pulling city-related posts from a platform like Twitter is achievable with relatively little effort, assuming consistent hashtags or other identifiers. Absent a uniform label, entity resolution (resolving different names for the same object) can become a problem. For example, how does a system identify “Golden Gate Park,” “GG Park,” and “golden-gate-park” as the same place? Making such determinations often comes down to a choice between inaccurate automated solutions and resource-intensive human classification.
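To make the problem concrete, here is a minimal sketch of the kind of automated matching a city system might attempt, using only Python’s standard library; the canonical place list and the 0.8 similarity threshold are illustrative assumptions, not a production design.

```python
import re
from difflib import SequenceMatcher

# Hypothetical registry of canonical place names (illustrative only).
CANONICAL_PLACES = ["golden gate park", "dolores park", "ocean beach"]

def normalize(name: str) -> str:
    """Lowercase and collapse punctuation so 'golden-gate-park' and
    'Golden Gate Park' reduce to the same token string."""
    return re.sub(r"[^a-z0-9]+", " ", name.lower()).strip()

def resolve(mention: str, threshold: float = 0.8) -> str | None:
    """Return the closest canonical name above a similarity threshold,
    or None when the mention is too ambiguous to link automatically."""
    cleaned = normalize(mention)
    best, score = None, 0.0
    for place in CANONICAL_PLACES:
        ratio = SequenceMatcher(None, cleaned, place).ratio()
        if ratio > score:
            best, score = place, ratio
    return best if score >= threshold else None

for mention in ["Golden Gate Park", "golden-gate-park", "GG Park"]:
    print(mention, "->", resolve(mention))
```

Note what happens to the abbreviation: the first two mentions resolve cleanly, but “GG Park” shares too few characters with the canonical name to clear the threshold, which is exactly where simple automation starts handing work back to human reviewers.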
Determining how to deal effectively with unstructured content is another common stumbling block. User comments need to be qualified in some way (positive, negative, or something in between), and even manual review procedures are not foolproof. Grade DC, one of the few platforms actively scraping social media for citizen feedback, relies on a manual review process to qualify citizen comments, and even here the city has had to develop internal processes to deal with the occasional misclassification.
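For a sense of why fully automated qualification is hard, consider the simplest possible approach, a keyword-based bucketer; the word lists below are illustrative assumptions and bear no relation to Grade DC’s actual review process.

```python
# Illustrative word lists; a real system would need a far larger lexicon.
POSITIVE_WORDS = {"great", "love", "clean", "helpful", "fast"}
NEGATIVE_WORDS = {"dirty", "slow", "broken", "rude", "unsafe"}

def qualify(comment: str) -> str:
    """Bucket a comment as positive, negative, or something in between
    by counting sentiment-bearing words."""
    words = comment.lower().split()
    score = (sum(w in POSITIVE_WORDS for w in words)
             - sum(w in NEGATIVE_WORDS for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(qualify("The new bike lanes are great"))             # positive
print(qualify("Station was dirty and the train so slow"))  # negative
print(qualify("Saw the mayor at the parade today"))        # neutral
```

Anything this crude will obviously misfire on real comments, which is why a manual layer like Grade DC’s remains the norm.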
The vendor-driven model leveraged by Washington, DC would also price most small and medium-sized cities out of the market. A more realistic option for smaller cities is likely full automation through an open source solution, but that only heightens the challenges of data integration and comment qualification.
Such challenges aren’t unique to public sector data analysis. IBM Research’s Scott Schneider has pointed to the same issues in the company’s own work, including a project to help a major film studio gauge online sentiment toward an upcoming release. Sarcastic comments were a common source of misclassification. It takes only a little film knowledge and pop culture awareness to recognize that a comment like “amazing, the best film since #PlanNineFromOuterSpace” is an unflattering review. Yet most automated classification algorithms, and even manual reviewers without the right context, will miss the message’s intent.
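To see why, run that comment through the same style of keyword scorer sketched above (the word lists are again purely illustrative):

```python
POSITIVE_WORDS = {"amazing", "best", "great"}
NEGATIVE_WORDS = {"terrible", "worst", "boring"}

def naive_score(comment: str) -> int:
    """Count positive words minus negative words; no notion of context."""
    words = comment.lower().replace(",", "").split()
    return (sum(w in POSITIVE_WORDS for w in words)
            - sum(w in NEGATIVE_WORDS for w in words))

# Scores +2, i.e. strongly positive, even though the comparison to
# Plan 9 from Outer Space makes the comment a pan.
print(naive_score("amazing, the best film since #PlanNineFromOuterSpace"))
```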
The IBM team was not especially concerned with these outliers on the film studio project. Schneider points out that positive and negative corner cases often net out when rolled up into a broader classification. Google has taken a similar approach with its Fusion Tables product, aiming for answers that are good enough to meet client needs without spending too much time on the semantics of things like fuzzy matching (the automated linking of similar but non-identical pieces of text).
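A toy simulation illustrates why roll-ups are forgiving; the 70/30 sentiment split and the 5 percent error rate below are arbitrary assumptions chosen only to make the point:

```python
import random

random.seed(0)

# Hypothetical ground truth: 70% of 10,000 comments are positive (1).
truth = [1] * 7000 + [0] * 3000

# Assume the classifier mislabels 5% of comments in each direction.
predicted = [label if random.random() > 0.05 else 1 - label for label in truth]

errors = sum(t != p for t, p in zip(truth, predicted))
print("individual comments mislabeled:", errors)
print("true positive share:     ", sum(truth) / len(truth))
print("estimated positive share:", sum(predicted) / len(predicted))
```

Because the flips happen in both directions, hundreds of individual mistakes shift the aggregate share by only a couple of percentage points, which is often good enough for a broad sentiment read.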
Nonetheless, progress is being made on the harder integration and classification challenges. Academics like Andrew McCallum of the University of Massachusetts have completed studies using human feedback as data in classification algorithms. Here, each judgment provided by a human reviewer is treated as a new piece of training data, helping the system learn more effective classification techniques as it runs and gathers additional feedback. Such processes may ultimately represent the best compromise between inaccurate automated processing and accurate but labor-intensive review.
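A minimal sketch of that human-in-the-loop pattern, using scikit-learn, might look like the following; the tiny corpus, the ask_human stand-in, and the least-confidence selection rule are all illustrative assumptions rather than McCallum’s published method.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative corpus; 1 = positive, 0 = negative.
labeled_texts = ["love the new park", "potholes everywhere, awful"]
labels = [1, 0]
unlabeled = ["great farmers market", "bus was late again", "clean streets downtown"]

def ask_human(text: str) -> int:
    """Stand-in for a human reviewer (hypothetical; a real system
    would route the comment to an actual person)."""
    return 0 if "late" in text or "awful" in text else 1

vectorizer = CountVectorizer()
model = LogisticRegression()

while unlabeled:
    # Retrain on everything labeled so far.
    model.fit(vectorizer.fit_transform(labeled_texts), labels)
    # Pick the comment the model is least sure about.
    probs = model.predict_proba(vectorizer.transform(unlabeled))
    idx = min(range(len(unlabeled)), key=lambda i: abs(probs[i][1] - 0.5))
    # A human qualifies it, and the answer becomes new training data.
    text = unlabeled.pop(idx)
    labeled_texts.append(text)
    labels.append(ask_human(text))

print(labels)  # initial labels plus the reviewer's judgments
```

Each pass spends human attention only where the model is least confident, so automation handles the easy calls while scarce review effort goes where it teaches the system the most.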
Applying these same assumptions back to the public sector, the mining techniques currently available could produce satisfactory back-of-the-envelope measures of broad citizen sentiment. They are less likely to provide the quality of information governments would need to automatically route, and respond to, individual citizen complaints and questions. Hopefully, though, the intense interest in big data and social media will soon translate into better, cheaper, and easier-to-use tools.
* * *
Will Cook is a writer on civic innovation and technology, and a graduate of the Harvard Kennedy School. He has worked with the U.S. Department of Labor on open government initiatives and with the U.S. Treasury Department’s Middle East and North Africa office on regional economic development. Prior to the Kennedy School, Will worked in Lebanon with UNRWA and development NGOs on educational advancement for refugee populations, and in Chicago with Ernst & Young’s Advisory practice on technology and governance projects.
This piece is cross-posted from the Data-Smart City Solutions blog hosted by the Ash Center for Democratic Governance and Innovation at Harvard Kennedy School.
For more on mining social media and other sources for a better understanding of government performance, check out this video interview with Chris Murphy, Chief of Staff to Mayor Gray in Washington, DC, in which he discusses Grade DC, an innovative, data-driven initiative.