
Don Springer, CEO of Collective Intellect, on Integrated Social Business & the Future of Social Media Metrics | The Social Customer


Posted February 22, 2011 by Jennifer Roberts


This week, as part of our executive series, I talked with Don Springer, CEO and one of the original founders of Collective Intellect. He shared his insight on the primary drivers in social media and text analytics for 2011, the biggest challenges enterprises face as they roll out integrated social business processes, and the future of metrics.

How is Social Media challenging the way businesses operate today?

Well, let’s take a look at some figures. Industry research estimates that 127 million people, or 57.5% of Internet users, visited a social networking site at least once a month in 2010. The steady rise since 2009 is attributed to the ever-increasing popularity of Facebook. Not only is the number of users growing quickly, but the audience demographics also continue to widen: an estimated 59.2% of adult Internet users visited social networks monthly in 2010, up from 52.4% in 2009.

And as individuals adopt social media and begin using it to exchange recommendations, referrals and opinions, all of these conversations translate into an enormous volume of data. Unstructured data is expected to keep growing rapidly, with estimates pegging the compound annual growth rate at 62% from 2008 to 2012.
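To put that growth rate in perspective (a rough back-of-the-envelope calculation using only the 62% figure quoted above, not a number from the interview): a 62% compound annual growth rate sustained over the four years from 2008 to 2012 implies roughly 1.62^4 ≈ 6.9 times the 2008 volume by 2012.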

What this means from a business or market perspective is a seismic shift from one-to-many marketing to the considerably more complex world of social media engagement. This represents a changing of the guard from traditional marketing techniques to a personal, identifiable marketing effort. Social media giants like Facebook, LinkedIn and Twitter provide the platform for the conversation, or word-of-mouth recommendation, that ultimately influences purchase behavior. The interactive and multi-directional nature of these social channels has given customers ownership of the conversation, which companies are increasingly recognizing.

Yet many businesses are completely unaware that these social conversations about their brand, products or industry are going on. All of these insights and all of this user-generated content simply go unanalyzed and unreviewed, and so never inform business processes or outreach programs. Social media analysis draws on real-time, publicly available information that can be categorized and filtered, and it is the type of data that lends itself to open-ended, white-space analysis. Organizations can use a sophisticated listening platform to find out what is truly important to consumers, including insights so unexpected or surprising that the organization may never have realized those brand associations or product uses existed.

So, how can companies get started monitoring and using social media data?

We’ve put together a social media maturity curve, both to help companies identify where they are in the adoption and integration of social media analytics and to serve as a guide for becoming a social business.

[Image: Collective Intellect’s social media maturity curve]

It somewhat depends on the industry or business function, but for the organization as a whole, the act of “listening” to social media conversations, collecting basic insights around brand and product awareness, is critical to remaining relevant and competitive. But to use an example, let’s take a look at a market research company. It may begin to use social media insights to validate findings from traditional research. More interesting, though, is when it applies our semantic analytics engine to the data and begins to dispel misconceptions or surface unexpected insights. Once a company approaches social media analytics or text mining in a strategic manner, it can define how and what social media analytics it wants to gather, and this type of analysis can then be set up in a repeatable and consistent fashion. This data stream can be fed into a database and used to populate dashboards or other business process workflow systems.
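As a concrete illustration of that kind of repeatable data stream, here is a minimal sketch in Python of a “listen, classify, store” loop. Everything in it is hypothetical: the sample mentions stand in for a real listening feed, the keyword classifier stands in for a semantic analytics engine like Collective Intellect’s, and the SQLite table stands in for whatever database feeds the dashboards.

```python
# Minimal "listen -> classify -> store" sketch. The sample data and the naive
# keyword classifier are placeholders for a real listening platform and
# semantic analytics engine; the in-memory SQLite table stands in for the
# database that dashboards or workflow systems would query.
import sqlite3
from datetime import datetime, timezone

mentions = [  # stand-in for a stream of social conversations
    {"author": "@fan_42", "text": "Love the new release, setup was painless"},
    {"author": "@grumpy", "text": "Still waiting on support to fix my billing issue"},
]

def classify(text: str) -> str:
    """Toy topic tagger; a real system would use semantic text analytics."""
    support_words = ("support", "fix", "issue", "broken")
    return "support" if any(w in text.lower() for w in support_words) else "brand"

conn = sqlite3.connect(":memory:")  # a real pipeline would use a shared, persistent store
conn.execute("CREATE TABLE mentions (ts TEXT, author TEXT, topic TEXT, text TEXT)")
for m in mentions:
    conn.execute(
        "INSERT INTO mentions VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), m["author"], classify(m["text"]), m["text"]),
    )
conn.commit()

# A dashboard would run queries like this one: mention counts per topic.
for topic, count in conn.execute("SELECT topic, COUNT(*) FROM mentions GROUP BY topic"):
    print(topic, count)
```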

And really, that’s where the true power of social media can be realized: when companies begin to take an integrated, 360-degree view of their customers. Because it’s not only listening to and engaging in social conversations that creates or defines the value of the relationship; it’s capturing those exchanges and marrying them to other data points so that companies reach out to customers at the right time, where they are, and with the appropriate message.

Where are most of CI’s clients on the maturity curve?

They are in step 3, Social Research, using blended qualitative and quantitative reports to inform strategy and then converting insights from those reports into real-time, always-on dashboards. This is the first phase of really unleashing the power of social analytics mentioned above: getting data flowing to make informed decisions around marketing, sales, and customer service tactics, to course-correct activity and improve ROI.

Our more savvy customers are building on this to enter into Social Targeting, Social CRM and Social BPM with a variety of initiatives this year. Companies are no longer just performing ad-hoc listening. The enterprise is moving towards the need to have repeatable, scalable and consistent social media analytics fed into a data stream and then manifested in a dashboard.

At this point in a company’s adoption and integration of social media analytics, we usually work with a governance committee comprising PR, brand managers, and customer service/relationship marketing interests. And the questions they are asking are: How do we invest in an enterprise listening platform that becomes the data driver integrated into our other systems? How do we append data to a CRM database or transactional repository, where each conversation is defined as a transaction and fed into a business process management system?
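To make the “each conversation is a transaction” idea concrete, here is a hypothetical record layout sketched in Python. The field names are illustrative only and do not correspond to any particular CRM or BPM product’s schema.

```python
# Hypothetical shape for appending a social conversation to a CRM or
# transactional repository; field names are illustrative, not a vendor schema.
from dataclasses import dataclass, asdict

@dataclass
class SocialTransaction:
    customer_id: str       # matched CRM record, if identity resolution succeeded
    channel: str           # e.g. "twitter", "facebook"
    conversation_id: str
    topic: str             # output of the listening/analytics engine
    sentiment: str
    needs_followup: bool   # flag that routes the record into a BPM workflow

txn = SocialTransaction("C-1001", "twitter", "tw-98321", "billing", "negative", True)
crm_queue = [asdict(txn)]  # stand-in for the append/insert into the repository
print(crm_queue[0])
```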

We are seeing IT beginning to get involved, and they are looking at a variety of solutions: some are hosted in the cloud, others need integration at the data level, and some customers install our software to handle private data, such as confidential records, because they want to run it against their own databases and they want and need to keep control.

So, we are seeing this whole evolution at the enterprise level, where it’s no longer a single group doing its own thing, with no managed workflow, and where customer engagement is only as good as the individual doing the engaging.

This is exactly where we think the industry and the adoption of both social media analytics and tools are going. Soon, social media analytics will not be a stand-alone metric; data will flow everywhere, and it will become a data management question. Companies that effectively manage data to optimize social business collaboration will be able to develop and nurture the lifecycle value of their customers and their customers’ social networks.

The benefits of social media analytics, and increasingly of integration, are becoming more apparent. Why do some companies struggle to adopt social media?

Much of the integration difficulty can be attributed to a lack of management support and to confidentiality concerns, which seem to prevent full adoption of social media within the enterprise. According to a recent market research survey, 33% of respondents stated they were not the decision-makers, and 14.7% cited “management resistance” as an obstacle to social media adoption. To add to this, 33.1% pointed to “confidentiality issues” as a reason for not adopting social media.

One key reason that management resists is that social media investment lacks transparency, with less than 15% of companies that use social media measuring return on investment and over 33% not measuring return on investment at all. It seems that many companies struggle with identifying what to measure, how to measure and how to interpret the data when they are able to gather results.  This lack of transparency and measurable success ultimately leads to decreased confidence about a company’s social media strategy.  As the ROI discussion evolves and more companies are actively pursuing a social media strategy, the need for measuring and analytics will become more prevalent.

The Future of Metrics is Outcomes

Companies are not only looking at brand awareness metrics and sentiment but also at consumer conversations at the category level, which can be used for benchmarking and competitive comparisons. This type of metric is much more sophisticated than sentiment analysis because it’s about outcomes: “Are we fixing the problem for our customer?”

For example, call center volume is decreasing as individuals use social media tools to find answers themselves or ask their friends for help. They are canvassing people they trust who willingly share their knowledge, sometimes while they are on hold with a customer service agent.

The answers a customer is searching for may not be coming directly from the company. Companies would prefer that their solution, their answer, was the one referenced, either by an individual within a customer’s social network or via a direct engagement with the customer. The metrics worth tracking, and that can be validated by social media analytics, are: “Did we fix the problem?” “Are our solutions referenced and amplified by the social network?” “If so, by whom?” “If not, who is providing the support?” These are very different indicators than simple sentiment or activity volume.
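As a sketch of how those outcome metrics could be computed (illustrative only; it assumes an upstream analytics or BPM step has already tagged each conversation with whether it was resolved, whether the company’s own content was referenced, and who provided the answer):

```python
# Toy outcome metrics over tagged support conversations: resolution rate,
# how often the company's own solution was referenced, and who resolved the
# issue when the company didn't. The records below are hypothetical.
from collections import Counter

conversations = [
    {"resolved": True,  "company_content_referenced": True,  "resolver": "company"},
    {"resolved": True,  "company_content_referenced": False, "resolver": "peer:@helpful_user"},
    {"resolved": False, "company_content_referenced": False, "resolver": None},
]

total = len(conversations)
resolved = sum(c["resolved"] for c in conversations)
referenced = sum(c["company_content_referenced"] for c in conversations)
outside_resolvers = Counter(
    c["resolver"] for c in conversations if c["resolved"] and c["resolver"] != "company"
)

print(f"Did we fix the problem?        {resolved}/{total} conversations resolved")
print(f"Were our solutions referenced? {referenced}/{total} conversations")
print(f"Who else provided the support? {dict(outside_resolvers)}")
```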

Savvy organizations have BPM solutions and managed workflows in place to track where a complaint originated and where it was ultimately resolved, regardless of platform. From a content perspective, they are creating links to collateral (digital libraries) organized by frequency and amplification measurements, so that they are promoting their brand’s perspective on the issues.

Are creative people less likely to become leaders?


Excerpt from the recent article “A Bias against ‘Quirky’? Why Creative People Can Lose Out on Leadership Positions” in Knowledge@Wharton.

“Creativity is good — and more critical than ever in business. So why do so many once-creative companies get bogged down over time, with continuous innovation the exception and not the norm? Wharton management professor Jennifer Mueller and colleagues from Cornell University and the Indian School of Business have gained critical insight into why.

In a paper titled, “Recognizing Creative Leadership: Can Creative Idea Expression Negatively Relate to Perceptions of Leadership Potential?” to be published in the March 2011 issue of the Journal of Experimental Social Psychology, Mueller and co-authors Jack A. Goncalo of Cornell and Dishan Kamdar of ISB undertook three studies to examine how creative people were viewed by colleagues. The troubling finding: Those individuals who expressed more creative ideas were viewed as having less, not more, leadership potential. The exception, they found, was when people were specifically told to focus on charismatic leaders. In that case, creative types fared better. But the bottom line is that, in most cases, being creative seems to put people at a disadvantage for climbing the corporate ladder. “It is not easy to select creative leaders,” says Mueller. ‘It takes more time and effort to recognize a creative leader than we might have previously thought.'”

If this study is correct, could it be the reason why innovation in many companies has yet to reach senior executive levels? Or has the study missed important points? What is your opinion, and have you ever met a truly creative senior executive? Would Apple’s Steve Jobs, from the viewpoint of the study, be the exception rather than the norm?

Have your say here.

 

Four Roles for Your Innovation Team by Tim Kastelle


Here’s a persistent innovation management question: is it better to have a dedicated team responsible for innovation, or should this responsibility be distributed throughout your entire organization? The best answer depends on your circumstances. But if you set up a dedicated team, it’s important to consider what role you want it to play. There are four different roles that a dedicated innovation team can fill.

One of the organizations that John and I do quite a bit of work with has set up a new internal group to help facilitate the identification and execution of innovations that will have a longer-term impact on performance. Prior to this, the group had been responsible for facilitating all innovation throughout the organization. In the new configuration, a different group is responsible for supporting incremental innovations, while the longer-term group, which includes all the people we’ve been working with over the years, is supposed to be looking at “emerging opportunities.”

Over the past few months I’ve been working with them to try to figure out what their business model should be. As we talked things through, we realized that there were really four different roles that they could try to fill. This is what they are, in order of increasing levels of resource commitment:

  1. Information Facilitation: this is essentially the role they used to have before the restructure. When you do information facilitation, you find information about innovation and distribute it to the people who are generating ideas, which helps them figure out how best to execute the new ideas. In this role you can also work on developing processes and infrastructure that support all parts of the innovation process. This type of group is most active in supporting idea generation.
  2. Opportunity Consultant: a group doing this will do everything that an Information Facilitation team does, but it will take a more active role in selecting ideas. It works to ensure that the ideas that are pursued connect with the organization’s overall strategy. In this role you work on developing the best possible set of criteria for evaluating ideas, particularly for fit with objectives.
  3. Opportunity Enabler: this type of group goes one step further – it works to connect ideas with those who have the resources to execute them. Enabling collaboration is a big part of this role – you need a group in this role if you are pursuing an open innovation strategy. This type of team will also work on developing implementation plans and on quantifying outcomes and learnings from new initiatives. Opportunity Enablers are active in supporting all steps in the innovation process – idea generation, selection, testing and diffusion.
  4. Execution Delivery: this is the most active role you can have – this is a group that doesn’t just support the innovation process, it actually undertakes all the steps. Most R&D groups fall into this category.

It pays to think about this taxonomy for a few important reasons:

  • Upper management often thinks that it is setting up an Execution Delivery group, but only puts together a team with sufficient skills and resources to successfully fill one of the less intensive roles. You can’t set up an innovation group, with responsibility for innovating, without also providing the resources that are required to do this. If you have limited resources (or limited commitment), it is better to acknowledge up front that your new team will be Opportunity Consultants or Enablers, or even Information Facilitators. The clearer you are about the group’s objectives, the more likely it is that they will be successful. And the objectives have to align with the resources.
  • The skills that you need to fulfill each role increase substantially as you move up the list. This is one of the issues that the team we’re working with faces: they started out as Information Facilitators, but in their new role they will only deliver value to the organization if they are able to be Opportunity Enablers. This requires a different set of skills. Fortunately, the group is very bright and quick to learn, so they may well be able to build these skills. But again, you have to think about what skills are required up front.
  • Because the skill requirements are different, don’t expect one group to fill more than one, or at most two, of these roles. To some extent the lower-level activities are included as you move up the ladder, but not entirely. If you need to have all four roles filled within your organization, you probably need more than one group working to support innovation, or at least clear assignment of responsibility for the different roles to different people within one large team.

Using specialist teams to support innovation is a really good idea. However, in order for them to be successful, you need to be clear at the start about which role you want them to fulfill. Each one requires different skills, and different levels of resourcing. If you want a high-performing Execution Delivery team, you need to resource it appropriately.

If you don’t need a full delivery team, or if you don’t have the resources or commitment to support one, then you need to scale back expectations. It’s a question of figuring out which role best supports your overall strategy. That’s how you work through your ball of creative mess.

    Editor’s Note: If you enjoyed this article you will also enjoy The Nine Innovation Roles by Braden Kelley



Tim Kastelle is a Lecturer in Innovation Management at the University of Queensland Business School. He blogs about innovation at the Innovation Leadership Network.

    Measuring Performance with the Tech Transfer Health Index by Melba Kurman


The “tech transfer health index” is a simple but powerful technique to quantify the impact and productivity of the entire long tail curve of technologies in a university’s IP portfolio. Here’s why we should adopt it.

When I worked in a university technology transfer office, we spent a lot of time pulling together performance metrics. We had 14 different reports, each with its own subtle nuances and unique methodologies. Needless to say, despite our best efforts, our metrics didn’t reconcile well over time and unintentionally gave the impression that our tech transfer office was somewhat, uh, creative in our accounting. The problem, however, wasn’t just accuracy.

    Our metrics missed the mark because they didn’t reflect the whole story: we counted mostly technology activity in the head of the long tail curve of distribution – the high-earning technologies, new startups, and issued patents. However, most staff time was spent managing “tail” technologies – filing provisional patents, marketing technologies, keeping on top of licensees who weren’t paying their bills, putting on events, and processing all types of agreement-related paperwork. Another limitation of our approach was that we counted all commercial licenses the same way, regardless of their associated impact or revenue (of course revenue is not a perfect proxy for impact, but lumping together anything with a signature on it created a meaningless and distorted depiction of our performance). Finally, we tallied metrics in our own, idiosyncratic way that was hard to explain to outsiders, so even our AUTM metrics could not be easily compared to those from a different tech transfer office.

Enter the tech transfer health index. I got the idea to create a tech transfer health index in a conversation with a faculty friend. I was describing the university commercialization RFI responses I’ve been reading. A common theme amongst responding universities is their quest for performance measures that would 1) focus on more than just revenue from “big hits,” 2) better convey the activity of their entire set of active licenses, from the high earners all the way down the tail, and 3) indicate the large amount of invisible and unheralded staff time and labor that is an essential part of marketing and managing an IP portfolio. In addition, though not mentioned by university respondents, based on my experience, effective metrics should be hard for tech transfer offices to interpret in unique ways, or to unintentionally “game”; watertight metrics would increase stakeholder confidence in the TTO’s transparency.

Turns out that faculty have found a solution. Most universities now use a performance evaluation technique called the H-index to measure the impact and productivity of their faculty’s scholarly work. The H-index is most commonly used to summarize the number of times a particular researcher’s papers have been cited by her peers. Before the H-index, tenure committees simply tallied up the total number of citations but did not consider their value and distribution. The H-index was created in response to flaws inherent in the traditional citation-counting method. Tenure committees discovered that (like a home run “greatest hit” technology) a researcher could claim a large number of citations without revealing that they all came from a single paper, a “one hit wonder.” Also (kind of like counting large numbers of provisional patents or low-value license paperwork), a scholar with a lot of citations could be drawing that count from many papers that were each cited only once or twice, a sign that while she wrote a lot of papers, none of them had a significant impact on other researchers.
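For readers who haven’t run into it, the H-index is the largest number h such that h of a researcher’s papers have each been cited at least h times. A minimal sketch in Python (the citation counts are made up) shows why a big pile of once-cited papers, or a single heavily cited one, doesn’t score well:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h items each have a count of at least h."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([2, 2, 1, 1, 1, 1, 1, 1, 1, 1]))  # -> 2: many papers, little impact each
print(h_index([47]))                            # -> 1: a single "one hit wonder"
print(h_index([30, 12, 9, 7, 4]))               # -> 4: fewer papers, broader impact
```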

    Three Different Health Indexes

The same logic behind the H-index can be applied to assess the health of university IP portfolios. Calculating the tech transfer health index is easy. I’ll bet you already have data on how much revenue each patent has earned over its lifetime. Use that data for your first health index analysis to evaluate how diverse and well balanced your licensing efforts are.

  1. Dig up the spreadsheet that lists the revenue earned by each patent (patents are a cleaner data point than technologies since they’re a finite IP unit).
  2. Rank the patents by the revenue they’ve generated over their lifetime, from largest to smallest.
  3. Make a chart with the horizontal axis for patents and the vertical axis for revenue. Plot the patents by their revenue in units of $1,000. You should quickly see a long tail curve emerge.
  4. When you’re done plotting, extend a diagonal line out from the origin (where the x and y axes meet) through the points (1, $1,000), (2, $2,000) … (10, $10,000), etc. – kind of like the straight grey line in the picture above.
  5. Where your diagonal line intersects the nearest part of the curve, draw a line down to the x axis: the distance from (0,0) to where the vertical line hits the x axis is your tech transfer health index.

For example, in the diagram above, this tech transfer office’s health index is three, so this office has three patents that each earned at least $3,000 over their lifetimes. Of course, when you chart your own health index with real data, your numbers will likely be much larger.
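Here is the same recipe expressed in a few lines of Python rather than on a chart, so you can compute the index straight from the spreadsheet without plotting. It is a sketch under the assumptions in the steps above (lifetime revenue per patent, a diagonal rising in $1,000 steps); the sample portfolio is made up.

```python
def tech_transfer_health_index(revenues: list[float], step: float = 1_000.0) -> int:
    """Largest h such that h patents each earned at least h * step dollars.

    Numeric equivalent of the chart method: rank patents by lifetime revenue
    and find where the $1,000-per-patent diagonal crosses the long tail curve.
    """
    ranked = sorted(revenues, reverse=True)
    h = 0
    for rank, revenue in enumerate(ranked, start=1):
        if revenue >= rank * step:
            h = rank
        else:
            break
    return h

# Hypothetical portfolio: one home run, a modest middle, and a long tail.
portfolio = [250_000, 18_000, 4_500, 4_200, 900, 400, 250, 0, 0, 0]
print(tech_transfer_health_index(portfolio))  # -> 4
```

The same function covers the variants described later in the article: swap disclosures or startups in for patents, or web hits or capital raised in for dollars, and adjust the step to match the units.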

    So how are you doing?

If you chart your portfolio and discover a long tail curve that’s very steep, your office is relying on a few patents that earn most of your revenue: in other words, a low health index. You may also have a low health index if your long tail curve starts low and stays flat. A low, flat curve indicates that your tech transfer unit is licensing a large number of patents but not getting much revenue back from them. It’s not necessarily bad to earn little revenue (after all, getting technologies out the door and into use should be the ultimate goal), but a low, flat curve does suggest you may be spending a lot of time and money on paperwork. An upside of quantifying a low health index of this type is that you can prove that your unit is managing a large volume of essential but unappreciated long-tail paperwork.

    You have a high health index if — like a productive and impactful researcher — your long tail curve starts high and gently curves downward. This means your office has found the right balance between impact (high earning home runs) and productivity (large numbers of low-income licenses). Congratulations!

    Here’s the value of using the health index:

    Rewards real tech transfer activity, not just fees: Conventional ways to increase revenue such as charging high fees or striving for a home run license will not improve your health index. Instead, the health index improves only with consistent and long term licensing activity over a broad spectrum of technologies.

Promotes true economic development: Your tech transfer office will have better ammunition with which to convince university administrators that there’s value in getting and maintaining a large number of low-revenue licenses from “tail” technologies. You can now quantify more than just high-revenue licenses.

    Makes it possible to compare large and small universities: Tallies discriminate against small universities. The tech transfer health index makes it possible to directly compare universities that have very differently sized IP portfolios.

Get credit for a well-rounded licensing portfolio: Your health index will confirm that your office is doing justice to the entire long tail curve of available technologies. You can point out that while the large volume of low-earning, low-visibility patents and licenses may not earn a lot of money, your office is effective in meeting the essential purpose of the Bayh-Dole Act: to get technologies out the door and into use.

Versatility: The health index is versatile. Instead of patents, the horizontal axis could plot other finite IP assets, such as technology disclosures or startups. Instead of dollars, the vertical axis could use other values, such as the number of web hits for technology disclosures or, for university startups, capital raised.

Widely applicable: The health index can scale up or down. It can be used to assess the performance of a single licensing officer, a group of universities, an entire geographical region (innovation cluster), or an industry segment such as biotech or nano-scale manufacturing.

    Easy to use in public: If the names of the patents, technologies or whatever you’re analyzing are removed, it’s possible to publicly and safely share your unit’s health index results.

Assess internal operations: You could use the health index as an internal management tool to figure out how efficiently you’re managing various aspects of your operations.

    In the unlikely event that someone were to interpret their metrics in a non-standard way, the health index would be harder to manipulate than standard straightforward tallies of new licenses, new startups, etc. However, realistically, no metric system is game-proof. For example, some researchers attempted to game the H-index by creating Citation Clubs where they set up fake “journals” with their friends and aggressively cited one another’s low quality papers.

Consider how hard it would be to set up something like a Citation Club in a university tech transfer office. A tech transfer director, desperate to create a good impression on his higher-ups, could in theory create a “Startup Club.” He could incorporate several “fake” startups (kind of like sham journals) that are wholly owned by the university. Next, this director could “negotiate” several licensing deals with himself (kind of like having his friends cite his articles) and put himself on the startups’ boards of directors (hooray, another award for the CV!). He could assign a tech transfer office employee to be CEO of each startup (despite zero revenue and no product). Voila, in one fell swoop, this hypothetical tech transfer office could enjoy an increase in revenue, more licensed technologies, plus a few additional new startups. But realistically, no one would do this. Even in the unlikely event that someone created a “Startup Club” to improve their performance metrics, the Club would be promptly dismantled by the powers-that-be.

    If you have estimated your health index, I’d love to hear how it went. Is anybody willing to share their actual data with me?

    TOOL: Thanks so much to the person who created the Excel tool that calculates the health index. Some people had problems with the zip file so I put the tool into an older version of Excel and now it will download as a proper file, not a zip. You can download the tool HERE. It makes you a chart and calculates your health index.



Melba Kurman writes and speaks about innovative tech transfer from university research labs to the commercial marketplace. Melba is the president of Triple Helix Innovation, a consulting firm dedicated to improving innovation partnerships between companies and universities.