Social media rant (another one)

Digital Evolution

Whether it’s family, friends or work colleagues, we all inevitably lose touch with our various networks – it could be argued this is a completely natural process, whereby over time we align ourselves with individuals we have more in common with, or perhaps who are more relevant given our current circumstances; either way, there is a sort of evolutionary selection process at play in the absence of digital influence.

Social media has of course changed this selection process over the past few decades, slowing the often predictable decay of a relationship through digital reinforcement. But how ‘real’ are these relationships? Are we simply prolonging connections that should ‘naturally’ devolve, or is social media an innate extension of how we should be communicating? It’s entirely possible that with the maturing of technologies such as VR and AR, the observed lack of emotion inherent in online interactions may be alleviated; we may be on the precipice of an elevated, more meaningful social experience, or conversely we could be about to tumble into a world where human emotion is crudely distilled down to its component parts, represented simply as 🙂

We should also consider that our digital identities, and all of our related interactions, will persist online for decades in many cases – this also demands a change in behavior; what we say online, who we say it to, where we say it etc. For example, current or prospective employers can relatively easily search for the ‘real’ you online, rapidly building a profile of you as a ‘brand’; ill-advised posts on social media can come back to bite us all – how much should we censor ourselves online, especially given many social services blur the lines of personal versus professional (e.g. FB/FB for Work)?

Burning Bridges – “I never burn bridges; I just fail to maintain them, and let them structurally degrade over time.” (Reddit shower-thoughts)

If you’ve never checked out Reddit’s ‘Shower Thoughts’, take five minutes and search for it; whilst the content is mostly light humour, some of it can be NSFW, so perhaps browse at home! The above quote made me chuckle, and then got me thinking a little more deeply about how these kinds of social interactions have radically changed with social media.

I don’t believe I’ve ever ‘burnt a bridge’, definitely not deliberately (and if I have, without my knowledge – sorry about that 🙂), but pre-social media, ‘burning a bridge’ would have taken a good amount of mental and physical energy, whether in person or via voice. Today, however, with social media, we’re all directly connected to someone, whether that’s via Facebook, LinkedIn or some other online service, and being a ‘friend’ or a ‘connection’ carries with it a certain level of professional courtesy and respect. Not accepting a social media invite, or *gasp* de-friending someone, is tantamount to social suicide, making the ‘burning of bridges’ that much easier to achieve. I believe this is even more prevalent among my Gen-Y colleagues, a phenomenon popularized by the movie Unfriended (admittedly, the film presents a fairly extreme reaction to unfriending).

A simple click online can have huge social ramifications; it’s an action that simply doesn’t exist in the real world.

There is a (perhaps obvious) reason for this type of online behaviour, or at least a proposed hypothesis, within the field of neuroeconomics – in short, “a computer does not require cognitive or emotional involvement, making our interaction with it much easier.” (Rilling, Sanfey, Aronson, Nystrom, & Cohen, 2004). Essentially, face-to-face interaction requires much more physiological and cerebral work in interpreting facial expressions and body language, unconsciously constructing a ‘model’ of the individual in terms of who they are, what they believe etc., whereas on the internet, as the old adage goes, ‘nobody knows you’re a dog’. For this reason, it’s far easier to express more extreme views, and perhaps be a little more provocative than in ‘real life’, since we are not as emotionally connected.

Talkin’ ’bout my generation

As a member of Generation-X (at least, according to the ONS graph), I’ve seen social media creep into both my personal and professional worlds, mostly for the better I would say, although certainly I’ve had to review my usage more recently, given how most of the site algorithms curate our content. I’m definitely starting to be more selective in my information consumption, and more importantly, in what I post in terms of updates and blog posts. I’ve also taken to spring cleaning my online identity, cleaning up unused sites (Google+ anyone?), even self-censoring in some cases.


Traditionally, we look to our ‘elders’ for advice and wisdom, and I believe that still holds true for the majority of subjects, but in the case of social media, I’m more closely watching my Gen-Y colleagues and friends, seeing how they manage themselves online, as ultimately these individuals will likely steer and define the online etiquette of the future. In doing so, I might also finally learn some of the current gen slang. Totes appropes.

TL;DR – social media should augment real world physical interaction, and not the other way around 🙂


Curated, Digital Content for Everyone

Curated Information Feeds

For a good few years now, I’ve actively modified who I follow and what updates I receive through social and news media. With Facebook, this generally means choosing who I want to receive updates from, and in what order (or at all), which allows me to maintain ‘friendship’ with individuals I don’t necessarily want updates from (there are only so many ‘inspirational’ quotes of the day one can take). Other sites like Instagram are by design more curated (around images, in that case), and I generally ‘digest’ news via Feedly (for which I’d recommend the premium package), along with free feeds such as Hacker News and Techmeme.

Hyper-targeted Marketing

Whilst curated feeds, at least for me, are far more valuable and time-efficient, I’m also aware I’m creating a hyper-targeted feed opportunity for the owners of the tools, and their customers. Behavioural advertising is nothing new – Amazon and the like have been doing this for years – however when I as the user start to tweak and modify my own feed, I augment and optimize the behind-the-scenes behavioural algorithms, resulting in even more targeted advertising. And it’s not just a pure advertising opportunity either: Twitter recently switched on its new algorithmic timeline for all users as a way to retain them, given the stall in user growth the company has been experiencing (as reported by Reuters).

Whether it is social media consumers modifying their own feeds, or providers activating new algorithms, traditional chronological feeds are slowly being replaced with curated, targeted content. Casey Johnston talks at length about the evolution of the information feed, stating that “Services that not only passively anticipate your needs but ask you questions, and answer yours, would be a good next step”. Johnston is speaking specifically to social media feeds, but the concept extends to all other sorts of data provision.

Intelligent Information Ecosystem

If we think about large, global-scale big data initiatives, or companies attempting to provide the ‘right information, to the right people, at the right time’, curated content is king. Investment bankers and lawyers don’t want a chronological stream of news, events or research data; they want highly relevant, contextual intelligence. Ann Winblad once stated that ‘data is the new oil’ (Is Data The New Oil?), but of course oil in and of itself does not hold value; it is how oil is employed that provides value to consumers.

The obvious example of this is the invention of the combustion engine, which ushered in a new era of incredible automotive innovation – the ‘killer app’ in a sense – and has remained a core component in the transport industry for decades. Granted, it is currently undergoing significant change with the likes of electric vehicles and alternative fuels, but the fundamental engineering paradigm remains. The ecosystem that emerged around this innovation is equally impressive, from continual advancement in automotive engineering performance through to an entire insurance industry; the number of products and services available to consumers has grown exponentially since the early part of the 20th Century.

Transforming the Digital Business

When we consider digital information, the ‘new oil’, companies now need to decide where they play in the digital ecosystem; do they look to manufacture the next combustion engine, something which everyone in some manner leverages and builds upon, or do they play in the space surrounding the killer app, providing additional tools and services which further enhance the core service offering?

Pioneering, lasting digital enterprises will be those which tap into a wide array of different digital business opportunities, connecting their internal data and devices to the external world. Partnering and collaboration will become ever more important, especially with tech-focused firms – competitive edge once maintained through scale and global reach will become commodity, and more agile start-up organisations will start to erode market share through tech innovation and price competitiveness.

I Want it All (and I want it now)

When I think about what I want from a digital service, be that general news or something more specific, I want a highly curated, customizable and secure service. I’m happy for the tool or feed owners to utilize my data for commercial purposes, but there must be opt-out options (with appropriate limitations on content), and there must be full visibility into how my data and metadata are used. Services I use regularly are already starting to make these changes, allowing me to further customize my experience and access information wherever I am (on whichever device), and as long as they continue to offer the ‘best’ experience for me, I’ll continue to use their service.

However, I am not tethered to these services in any way, I am not loyal to any one particular business. Tech innovation in many ways has made the user experience less sticky; it is quite easy for me to jump ship, and I can do so on a whim if the mood takes me. I think 2017 will start to see far more control and power over personal content afforded to the end user, which as someone working in the digital media industry, is both a terrifying and thrilling prospect!


Reuters News & the Semantic Web

As time goes on, I’m becoming more and more confident in the potential of the semantic web. Typically, I tend to be a ‘second mover’ when it comes to new technologies or trends – I’ll usually let someone else be the early adopter, or at least remain skeptical until a quorum has emerged. But with the semantic web, and Sir Tim Berners-Lee’s open, linked data vision, I was sold instantly – it just seemed to make sense.

As a result, the subject of my recent thesis was focused on certain semantic web concepts in relation to news headline development, specifically looking at ‘best practice’ elements which form an objective, relevant, descriptive and cognitively cost-effective headline, to see if a best practice framework, or methodology, could be derived from user preference. Initial results appeared to confirm my hypotheses, but along the way, I began to explore an interesting parallel with linked data standards.

Typically, news headlines tend to follow the subject–verb–object (S-V-O) structure found in general linguistic typology; for example, taking a collection of Reuters headlines, we can see that same structure present –

  • Islamic State battling Kurdish forces in Northeast Syria
  • Pakistan paramilitary raids HQ of major party MQM in volatile Karachi
  • Obama announces changes for student loan repayment
  • PayPal sets up Israeli security center, buys CyActive

Each of these headlines initially follows the S-V-O triple structure, with a little more information appended to the end of the initial triple. Prior research indicates this structure is somehow more initially ‘obvious’ to human psychology, easier to process cognitively, and interestingly, the same structure is used in the RDF specification, a declarative language influenced by ideas from knowledge representation. Within the RDF world, information is presented as a Subject–Predicate–Object triple, directly analogous to the linguistic Subject–Verb–Object triple. For example, if we take one of the headlines above – the PayPal entry – run it through an entity extraction tool such as the Calais Viewer, and parse the resulting RDF using the W3C RDF Validation Service, we end up with a set of triples that look very similar to the linguistic subject-verb-object triple –

Roughly translated from the URI, this is telling us “PayPal Inc ticker symbol is EBAYP“. So within the news headline triple, we can see embedded RDF triples based upon the particular entity, and in many cases, RDF triples can be directly transposed into headlines themselves (although in this case the RDF triple is probably not exactly news worthy!).
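To make the structural parallel concrete, here’s a minimal sketch in plain Python representing both the linguistic triple and the extracted RDF-style triple as (subject, predicate, object) tuples – note the URIs are illustrative placeholders, not the exact ones Calais emits:

```python
# A news headline decomposed into its linguistic S-V-O triple
headline_triple = ("PayPal", "sets up", "Israeli security center")

# An RDF statement has the same three-part shape: subject, predicate, object.
# These URIs are illustrative placeholders, not actual Calais output.
rdf_triple = (
    "http://example.org/company/PayPalInc",   # subject
    "http://example.org/ontology/hasTicker",  # predicate
    "EBAYP",                                  # object (a literal)
)

def spo(triple):
    """Unpack any triple into its three roles."""
    subject, predicate, obj = triple
    return {"subject": subject, "predicate": predicate, "object": obj}

# Both decompose identically, which is the parallel being drawn
print(spo(headline_triple))
print(spo(rdf_triple))
```

In a real pipeline you’d use a library such as rdflib to parse the RDF, but the underlying shape of the data is the same three-part statement.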

I’m not breaking any new ground here, simply re-stating what is already known, but conceptually thinking about how we access triples helps us understand how we can derive value from that information. In the same way we use language to retrieve information from an S-V-O triple in our person-to-person interactions (e.g. ‘Chris works at Thomson Reuters’), we can access similar information from RDF triples using a query language such as SPARQL. And that’s really the idea behind the semantic web: to promote a common framework that allows us to share data – making the connection between something ‘real’, like our Reuters news editorial function, and the work our Big, Open, Linked Data (BOLD) team is undertaking around linked data.
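At its core, a SPARQL query is pattern matching over those triples, with variables standing in for unknown positions. Here’s a hedged pure-Python sketch of that retrieval idea (the data is illustrative, and real SPARQL runs against a proper triple store):

```python
# A tiny in-memory 'triple store' (illustrative data, not real extracted RDF)
triples = [
    ("Chris", "worksAt", "Thomson Reuters"),
    ("PayPal", "setsUp", "Israeli security center"),
    ("PayPal", "buys", "CyActive"),
]

def match(store, s=None, p=None, o=None):
    """Return triples matching a pattern; None plays the role of a SPARQL variable."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly the spirit of:  SELECT ?p ?o WHERE { :PayPal ?p ?o }
for _, pred, obj in match(triples, s="PayPal"):
    print(pred, "->", obj)

# Roughly the spirit of:  SELECT ?s ?o WHERE { ?s :worksAt ?o }
print(match(triples, p="worksAt"))
```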


Data Privacy Musings, February 2016

The global data privacy landscape is currently a fairly chaotic, uncertain place. The Snowden revelations, along with Wikileaks and similar, have sparked all sorts of questions around who has access to what; global surveillance, monitoring and encryption have been hot topics for Silicon Valley and government leaders alike, and now we see these issues once again knocking at our front doors. This is not exactly breaking news, as data privacy has always been a concern of business and technology professionals; however, the issue is seeing much more media coverage today than previously, which is resulting in more scrutiny and attention from both clients and regulators.

Safe Harbor no more

For example, in October last year (2015), the Safe Harbor Agreement was invalidated by the European Court of Justice (covered by TechCrunch at the time); some 4,700 companies relied on Safe Harbor to operate businesses handling European data in the US, hence the ruling was met with a fair amount of alarm. Recently, a new transatlantic data transfer deal was announced between the EU and the US – the ‘Privacy Shield’ (which does beg the question, will there now be ‘Agents of Shield’?). This new agreement is essentially proposed as a more robust, tighter Safe Harbor, but critics are already voicing their concerns. MEP Jan Philipp Albrecht had this to say –

“The EU Commission’s proposal is an affront to the European Court of Justice, which deemed Safe Harbor illegal, as well as to citizens across Europe, whose rights are undermined by the decision. The proposal foresees no legally binding improvements. Instead, it merely relies on a declaration by the US authorities on their interpretation of the legal situation regarding surveillance by US secret services, as well as the creation of an independent but powerless Ombudsman, who would assess citizens’ complaints.”

What is clear, from both Albrecht’s statement and general chatter around the topic, is that global content corporations and SMBs will need to remain highly flexible in their architectural planning, and be able to adapt rapidly to changing laws. From a technology standpoint, this is a significant challenge, especially given the trend towards cloud computing. Cloud hosting can potentially remove the technical headache and cost of management for technology departments (although those functions remain ultimately responsible for the security of the content – who holds the right to what information, and where that information is physically located), but for most businesses this would mean a significant change in thinking around perceived security of content by end users – public and hybrid cloud configurations still harbour concern for many clients and content owners, and require a cultural change more than anything else.

GDPR, and Global Data Protection

Data privacy also presents significant logistical, process and regulatory challenges, alongside the multi-faceted technological challenge businesses face. In addition to compliance with the likes of the incoming GDPR (and global equivalents), there are a multitude of national laws, regulations and guidelines on the protection of privacy and transborder flows of personal data which need to be adhered to. To give an idea of how complex this really is, consider the below provisions introduced by the upcoming GDPR (source: Data Privacy Monitor) – this is merely an extract of the key provisions financial institutions will need to consider once the GDPR goes ‘live’:

  • The law applies to any controller or processor of EU citizen data, regardless of where the controller or processor is located. (Under the 1995 Directive, only controllers were directly liable.)
  • EU Data Protection Authorities have been given new powers, including the ability to fine organizations up to 4% of their global turnover for violations of the new GDPR provisions.
  • In the event of a data breach creating risk to the “rights and freedoms” of EU citizens, notification must be made to the relevant data protection authorities within 72 hours of discovery of the breach.
  • Personal data of EU data subjects should only be collected for “specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes.”
  • Processing of EU citizens’ data will only be lawful if the processing is done in accordance with one of the following 6 grounds: (1) with explicit consent of the data subject, (2) to perform a contract, (3) to comply with a legal obligation, (4) to protect the vital interests of the data subject, (5) to perform a task in the public interest, or (6) where “necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject, which require protection of personal data, in particular where the data subject is a child.”
  • A data subject’s consent will be invalid if the controller requires consent for the provision of a service where the processing of personal data is not necessary to the actual performance of the service or contract.
  • Data controllers must provide any information they hold about an EU citizen free of charge and within one month of request.
  • EU citizens have a “right to erasure,” which requires data controllers to delete personal data if: (1) the data are no longer necessary in relation to the purposes for which they were collected or processed; (2) the data subject withdraws consent on which the processing was based and there is no other legal ground for processing the data; or (3) the data were unlawfully processed, among other grounds.

Now consider that this is but one regulation (albeit a far-reaching one) – take a look at Forrester’s Global Data Protection and Privacy Heatmap, which provides a snapshot of global data protection and privacy requirements. It’s quite the minefield!

The Perception of ‘Privacy’

On the other hand, I can’t help thinking that perhaps many of these issues will simply dissipate over time, as society evolves (relaxes?) its thinking around privacy in general. I’ve had many conversations with individuals in the Generation Z demographic, and they are generally less concerned about where and how their PII data is utilised, and more about how they can leverage their own data for personal gain (whether that be ‘selling’ it, or promoting themselves in some manner). These same individuals will be running the corporations of tomorrow, setting the laws and outlining the guidelines, so perhaps much of this will simply cease to be an issue. That’s not to broadly paint Generation Z with a brush of indifference around privacy and confidentiality; it’s just that perhaps their baseline is more transparent than my own. Gen Z is generally much more tech savvy when it comes to online privacy, opting for tech such as biometrics over passwords, anonymous chat apps over email, or simply understanding the privacy settings of social media. An article I read a while ago included an amusing quote that resonated (being a Gen Y-er myself):

“As far as privacy, they (Generation Z) are aware of their personal brand, and have seen older Gen Y-ers screw up by posting too openly.”

The Data Privacy Advantage

There is no panacea here – organizations will simply need to remain diligent, flexible and able to adapt rapidly to changes in the privacy landscape. Simply demonstrating competence in this space is a good start, ensuring regulators and auditors recognize efforts to remain in compliance, with the ultimate overarching goal of staying one step ahead of any more impactful legal requirements.

Business and technology leaders need to balance functionality with security – laws and regulations cannot be so stringent as to suffocate commerce (and innovation), but equally they must protect commercial and individual interests; companies investing in data can no longer focus only on the value of content, they must also consider how it is protected.

But, this same focus on protection, provenance, auditing and reporting of data can be considered a competitive advantage – the differentiator between two seemingly identical content providers could easily come down to the privacy policies, robustness and transparency offered by the vendor(s).

Perhaps we shouldn’t see data privacy legislation as simply something to be compliant with, but also as another way to compete – this opportunity will likely dissipate over time, as the privacy landscape settles and commodity solutions emerge, but in the interim, early adopters, contributors and champions of data privacy could well provide a considerable advantage in the market.

With the astonishing growth of global data & content, advances in technology and new, disruptive innovation we’re seeing in commerce, 2016 is set to be a fascinating year for data privacy.


Identity Access Management on the Blockchain

I’ve been looking at blockchain technology recently (with some level of confusion), in regard to its potential application to Identity Access Management (IAM), and more generally at the cryptocurrencies that have emerged following Bitcoin’s success. It remains to be seen how the Bitcoin implementation will continue operating once all coins have been mined and the model shifts to a more transaction-fee-based set-up, but certainly the underlying technology and approach is intriguing. Applied to the authentication world, the concept of ‘letting everyone in’, as opposed to the more traditional (and shareholder-friendly) method of ‘keeping people out’, feels like the right approach given technological trends towards ‘openness’ (and the fact most black-hats are uber-intelligent techies, either pushing a particular social/moral/political agenda or simply doing it for the ‘thrill’ – communities that won’t be disappearing any time soon), but the lack of an incentive component (more on that shortly) seems prohibitive.

First things first, I highly recommend these sources for understanding the blockchain model in more detail: Michael Nielsen’s ‘How the Bitcoin protocol actually works’ and Khan Academy’s ‘Bitcoin: Transaction block chains’. Essentially, Bitcoin’s implementation relies on two broad themes – openness and incentive (arguably a good model for any technology). The openness piece is the sharing of a peer-to-peer public ledger of transactions (the blockchain), of which everyone on the network has an up-to-date copy (e.g. via your Bitcoin wallet), and which everyone can use to validate transactions – one never actually owns a Bitcoin per se, simply a record of your assigned Bitcoins held in the public ledger. The incentive piece is what makes the model work, and was the piece I was, until recently, having difficulty understanding. First we have to understand the Bitcoin miner.

A Bitcoin miner validates transactions and then broadcasts the validity of those transactions back to the wider network. There are obvious issues with this: if it were a simple validation look-up, I could hack the system, begin double- or triple-spending Bitcoins, or rewrite the entire blockchain for my own nefarious ends. So instead, before I can communicate that a set of transactions is valid, I have to solve a puzzle to prove that my validation of the transactions is true; this is known as the ‘proof-of-work’, and requires huge computational power (making it largely impossible to game the system), given I need to run millions of calculations to find the correct answer.

The puzzle involves a hash function (Bitcoin uses the SHA-256 hash function); essentially, I have to produce a hash string based upon two inputs – firstly, the combination of all transactions currently in my block awaiting validation, and secondly, a variable modifier known as the ‘nonce’. The Bitcoin protocol requires that the generated hash be lower than a set ‘target’ (this target is constantly modified to ensure a set threshold of solvability, measured in time taken to solve, usually around 10 minutes); once I generate a hash which satisfies the target, the problem is solved. At this point, I’m able to broadcast my message that the transactions are valid to the entire network, and subsequent nodes (users) on the network can use my produced hash to validate the solution (given they all have the same set of transactions in the block to use as input, thanks to the public nature of the ledger, they should be able to produce the same hash – if they cannot, then my solution, and the transactions, are invalid). This hash is then stored against the current block, connected to all prior blocks (and their hash codes), creating the ‘block chain’.

Note: many explanations express the ‘target’ as a required number of leading 0’s, which essentially acts as the same thing – using the block’s transactions and the ‘nonce’ variable as input, I need to produce a hash with a set number of leading 0’s; once I meet that target, I have solved the problem.
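The leading-zeros version of the puzzle can be sketched in a few lines of Python – a toy model only (real Bitcoin double-hashes a binary block header and compares against a full numeric target; difficulty is set low here so it solves instantly):

```python
import hashlib

def mine(block_transactions, difficulty=4):
    """Brute-force a nonce until SHA-256(transactions + nonce) begins with
    `difficulty` leading zeros (a stand-in for Bitcoin's numeric target)."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_transactions}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Finding the nonce takes many attempts; verifying it takes a single hash.
nonce, digest = mine("alice->bob:5;bob->carol:2")
check = hashlib.sha256(f"alice->bob:5;bob->carol:2{nonce}".encode()).hexdigest()
assert check == digest and digest.startswith("0000")
```

This asymmetry – expensive to solve, cheap to verify – is what lets every node on the network quickly confirm a miner’s work.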

This is the incentive part. Why would I spend my time solving these complex problems, investing considerable funds in hardware and processing power? Because in solving a problem, I receive a set number of Bitcoins as a reward. Solving these problems becomes increasingly complex over time, requiring more and more intensive compute to calculate the answer – this is why Bitcoin mining communities have emerged, harnessing the power of ‘cloud’ to exploit global processing power. In order to hack the blockchain, I would need to update thousands of nodes with modified block hashes, at huge scale and speed; this makes the blockchain practically impossible to hack, given the colossal processing power required.
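The ‘chain’ part – each block’s hash incorporating the previous block’s hash – is what makes retrospective tampering detectable. A minimal sketch, with mining omitted and illustrative transaction strings:

```python
import hashlib

def block_hash(prev_hash, transactions, nonce):
    """Hash a block's contents together with the previous block's hash."""
    return hashlib.sha256(f"{prev_hash}{transactions}{nonce}".encode()).hexdigest()

# Build a three-block chain (nonce fixed at 0; proof-of-work omitted for brevity)
chain, prev = [], "0" * 64  # all-zero 'genesis' hash
for txs in ["alice->bob:5", "bob->carol:2", "carol->dan:1"]:
    h = block_hash(prev, txs, 0)
    chain.append({"prev": prev, "txs": txs, "hash": h})
    prev = h

def valid(chain):
    """Re-derive every hash; any historical edit breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["txs"], 0) != block["hash"]:
            return False
        prev = block["hash"]
    return True

assert valid(chain)
chain[1]["txs"] = "bob->carol:2000"  # tamper with history
assert not valid(chain)
```

An attacker would have to re-mine the tampered block and every block after it, on the majority of nodes, faster than the honest network extends the chain – hence the ‘colossal processing power’ point above.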

It is this open and transparent approach to maintaining a financial ledger that is most interesting about the technology – the way blockchain fundamentally changes how a transaction operates. This same paradigm shift could be seen within the Identity Access Management world: we stop building walls to keep people out, and instead open the front door and let people in, with processes managed by the community, and data secured by incentive and scale through the blockchain model. As I mentioned earlier, we probably need to see how Bitcoin (and others) evolve past their initial incentive model to understand whether this is a long-term play, but at this point in time, it certainly seems a viable option for IAM.


Social Collaboration

Does your business have a social network? Is it an Enterprise social network? What collaboration tools do you have, are they the same as social tools? There are a number of different ways to describe social and collaborative tools, and they are often used interchangeably. This is fine in a general sense, as the two of course overlap, but when you start to consider what your social strategy is for your business, we need to dig into the detail a little, to understand what that means for different groups of individuals.

What is Social Collaboration?

In terms of a broad definition of social collaboration, Wikipedia’s works for me: “Social collaboration refers to processes that help multiple people interact and share information to achieve any common goal. Such processes find their ‘natural’ environment on the internet, where collaboration and social dissemination of information are made easier by current innovations”. Then there’s Aaron Fulkerson who, over at The Future of Collaborative Networks, has a nice example of what he thinks makes social networks distinct from collaboration networks (and I tend to agree) –

Generally, technology or content businesses today need both social and collaborative capabilities layered on top of their entire product portfolio – it’s the expected standard, and will likely become commodity in its application eventually.

I usually break down social collaboration into three camps – (1) external tools such as Facebook, or Twitter, (2) internal private client networks (Jive, Tibbr, Salesforce etc) and (3) internal private employee networks (intranets). To be a truly next-gen social technology company, businesses need each of these communities to interact with each other, as the value that can be potentially derived from that interaction is considerable.

For example, the intersection of (1) and (2) – clients that receive a good experience on your product(s) are more likely to share their experience on external media, if the ability to do so is quick and obvious – this doesn’t necessarily require any deep level of technical integration, but it does require an alignment of external social strategy and internal client network strategy; one needs to be able to monitor and influence that client activity (even promote it, if it makes sense). That internal private client network should also be deployed across the entire organization (or at least plug seamlessly into existing networks), in order that organizations can open up cross business and/or client community interaction. For example, if you serve both legal and financial professionals, there is probably an opportunity (and demand) to connect these two groups together.

The intersection between (2) and (3) is also critical – ensuring that your employees have connections into these client communities. Whether this is through a community manager, as a subject matter expert, or a business development executive, the potential for strengthening your client relationships and identifying cross-sell opportunities is huge.

The sweet spot, area (4), where all three broad communities interact, is a fascinating convergence point – your employees, your clients and external communities all interacting, connecting and collaborating; this is where truly innovative ideas are likely to incubate, where employees experienced in your current product lines are interacting with your clients, who are looking for new features, while external communities contribute additional market intelligence – it’s a cornucopia of product development loveliness!

Google, obviously…

So who does this well right now? It’s difficult not to look at Google, which has it pretty well nailed in terms of connecting all three of these pieces.

For example, using my Google Identity, I can access any number of different tools. I can head over to Google+ (recently revamped) to catch up on friends’ and colleagues’ activity, or search for a particular area of interest and subscribe to feeds; I could create a private group and use Google Apps for Work to collaborate around an idea or project (and store my files in Google Drive), or jump into a discussion around that same idea (Google native discussion boards, or Stack Overflow if technical). All of this activity is completely integrated as far as the user is concerned. I know I’m using Google at all times, social and collaborative tools are never more than 1–2 clicks away, and all of these features are easily discoverable through their core search offering. In addition, some of these features are linked to a ‘free to try, more to buy’ commercial model (GDrive, Apps for Work, App Store/Google Play etc.). The whole experience just, well, works.

Of course, Google for a long time focused on a ‘single’ service offering – search – and then augmented that service with additional features. That laser-like focus on a ‘single’ tool in the early days, and their subsequent success, allowed them to diversify and innovate safely, to move quickly and capitalise on market trends (in fact, Google recognises this, calling it out in their Ten things we know to be true). We obviously don’t all have that same luxury – businesses are established and grow in a multitude of different ways – but we all have at least one thing in common: our clients are human, and therefore social. The opportunity to connect with our clients, to build more meaningful relationships via social tools, is one we should all be taking advantage of.

Social Collaboration Strategy

Big picture, what kinds of things should businesses be considering in their social collaboration strategy? A few points I think are critical to overall strategy –

  1. Wherever possible, deploy a single social collaboration solution for your entire organisation, and consider how to embed that solution within existing products. As the global economy evolves and new markets emerge, there is no doubt traditional business models will be challenged – the ability to collaborate across these emerging areas of commerce will be vital.
  2. Think about how to connect that platform (above) to both the external world and your internal employee community, to start breaking down walls and connecting directly with individuals that are potential customers, or future hires. Interacting with all parts of your (commercial) world should be seamless and consistent.
  3. Take steps to connect your broader social media strategy to your client and collaboration strategy – the former is about building brand, attracting new clients and selling products (amongst many other things); the latter is about enablement and retention – the two are inextricably linked, but often separate initiatives.
  4. Understand your community. Launching a social collaboration network that connects you, your employees, your clients and the wider world can be incredibly powerful, but it can also be extremely damaging if not carefully planned. Ensure you fully understand why you’re connecting these communities, and approach it like you would any other requirements gathering process, software or business.
  5. Be consistent. Your brand message should be present in everything you do, especially social collaboration – clients and external communities should always be aware of who you are, and what you do, and your employees should be advocates of that strategy.
  6. Ensure the business invests and buys in to the strategy. This will require significant work – be able to explain a reasonable ROI, put together a solid business case, and reference industry statistics and vendor success stories. If the business is not fully behind a social collaboration strategy, the project will fail.

There are obviously many other considerations around any sort of social strategy – what content is published where; data privacy, residency and other legal concerns; the need to carefully manage and promote social capabilities, appropriately seeding conversations and acting as the subject matter expert in advisory situations; and so on. But these are all considerations and risks to be managed. Right now, it is entirely expected that a business provides some level of social collaboration capability – it ought to be part of your business development DNA.

The challenge (and opportunity) today is how you connect your various communities together effectively, to realise real, tangible value.
