

Using the Wisdom of Your Crowd

In his famous 2004 book "The Wisdom of Crowds", James Surowiecki told us that in the game show "Who Wants to Be a Millionaire?", the Ask the Audience lifeline got questions right 91% of the time, far better than Phone a Friend. Surowiecki cites a wealth of research showing how the combined independent opinions of a group of lay people can have much better predictive power than a single expert judgement.

We trust the same mechanism in jury trials: twelve lay jurors make better calls than the expert police officer.

But in business, unless the definitive facts stare us in the face, we mostly seem to default to accepting the intuition of someone in a leadership role.

The leader will have gained respectable expertise on their path to leadership, but Tetlock's ground-breaking 2006 study "Expert Political Judgment" clearly demonstrated that acknowledged experts are only slightly better than chance at predicting outcomes in their sphere of expertise, and substantially less reliable than simply predicting "no change".

The leader probably also takes advice, but Yaniv showed in 2004 that people assign rather low value to others' advice, especially where they feel they already know something about the subject. See et al. (2011) studied individuals' relative power and concluded that "higher power participants were less accurate in their final judgments".

So it seems clear that, if we can't get at all the facts in a reasonable time, we will get better decisions if we combine the judgements of many people.

How do you tap the wisdom of your crowd? It is very easy in principle, just 4 steps:

1. Decompose the problem
Breaking any problem down and looking at each part separately has been shown to increase forecast accuracy, so it's a natural place to start. It also allows you to ask different crowds about different parts of the problem, respecting both their interests and their time available to participate.

2. Identify your Crowd
You have a ready crowd around you: people who already speak the same business language but differ in their perspectives. So there is no need to go out on the street and educate completely lay people. But make sure you get sufficient diversity in your crowd; if you can, involve customers and suppliers as well as colleagues.

3. Pose the Questions
You need to ask your crowd questions that they can answer (from their judgement, plus whatever facts they have at hand). You also need to ask questions whose answers can be combined; mostly that means questions that can be answered with a number or a few numbers. With small crowds, a great approach is to ask each person for three numbers representing optimistic, most likely and pessimistic out-turns. These three together let you measure your crowd's confidence in their ability to predict, alongside the prediction itself (see the sketch after step 4).

4. Aggregate the answers
This is the trickiest part to do well, and I explain it in more detail in another post.
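
As promised under step 3, here is a minimal sketch of how a single crowd member's three numbers might be captured, and how the spread between them doubles as a confidence signal. It is purely illustrative (Python, with invented field names and figures), not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class ThreePointEstimate:
    """One crowd member's answer to a single question (illustrative only)."""
    respondent: str
    optimistic: float    # best plausible out-turn
    most_likely: float   # single most likely out-turn
    pessimistic: float   # worst plausible out-turn

    def relative_spread(self) -> float:
        """Width of the range relative to the most likely value.
        A wide spread signals low confidence in the prediction."""
        return (self.pessimistic - self.optimistic) / self.most_likely

# One colleague's estimate of a project's duration, in weeks
answer = ThreePointEstimate("colleague_a", optimistic=8, most_likely=10, pessimistic=16)
print(f"Relative spread: {answer.relative_spread():.0%}")   # 80%: a fairly uncertain estimate
```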

There is a fish-hook in this. Other research tells us that well-adjusted people are habitually over-confident, both in their ability to predict and in the outcome they predict. I will write another post on how you minimise the effect of over-confidence.

There is a printable PDF infographic depicting AcuteIP.com's approach to using the Wisdom of your Crowd

With enough preparation, you can do this yourself. AcuteIP.com offers a set of services to make it easy, from workshops and technical services to managing the whole project.
Contact Graham.Harris at AcuteIP.com to explore possibilities.

Is this post useful to you, or could it be useful to someone you know? Please do us both a favour- spread the word by sharing it through the colourful social media buttons at left or below.

(c) 2018 Graham.Harris at AcuteIP.com

Combine Responses from your Crowd

In Using the Wisdom of Your Crowd I explained how you have a crowd at your fingertips, and why studies show using their collective knowledge can generate much better business decisions than simply delegating the decision to an expert. I laid out the first three steps for using the wisdom of your crowd:

1. Decompose the problem (combining estimates of several sub-problems gives better accuracy than a single estimate of the top-level problem)
2. Identify your Crowd (hint: they are already around you)
3. Pose the Questions (so they are answered with numbers)

Step 4, Aggregate the Responses from your crowd, is still simple in principle. A spreadsheet can give you all the information you need to communicate the best choice, how it was arrived at, and the remaining uncertainty.

The responses to any question you asked can be represented as a statistical distribution curve, like the familiar Normal Distribution bell curve except that it is not symmetrical. In a range of cost or time estimates, the optimistic and likely points are usually closer together, while the likely and pessimistic points are further apart.

Figure: Triangle vs PERT distributions

You need at least three estimates, but the more you have, the better precision you will get. A good way to get more estimates is to ask each member of your crowd to give you their own pessimistic, likely and optimistic estimates. That also gets your crowd members to think a bit harder about their estimates. There are ways to calibrate your crowd to get even better estimates.

A statistical distribution is a much richer way to represent your data than the single guess you might have put up with in the past. When you draw the curve, it becomes obvious how well you collectively understand the problem.

My lightbulb moment came when I realised that you can add, subtract, multiply or divide distributions in a spreadsheet as easily as if they were single numbers.

For example, you could build distributions for each of your company's product ranges, and simply add them together to get a distribution of next year's overall revenue that still encapsulates all the crowd estimates and the level of uncertainty they represent. The sales team might still work towards a single target, but corporate management now have a much richer view, especially when they do the same for overhead and variable costs.
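
As a sketch of how that combination works (assuming a simple triangular distribution per crowd member, Monte Carlo trials that pick a member at random, and invented revenue figures; a spreadsheet Monte Carlo add-in does the same job):

```python
import numpy as np

rng = np.random.default_rng(42)
TRIALS = 100_000

def sample_crowd(estimates, n=TRIALS):
    """Turn a list of (optimistic, likely, pessimistic) triples, one per crowd
    member, into one sampled distribution: each trial picks a member at random
    and draws from that member's triangular distribution."""
    estimates = np.asarray(estimates, dtype=float)   # shape (members, 3)
    who = rng.integers(len(estimates), size=n)
    o, m, p = estimates[who].T
    return rng.triangular(o, m, p)

# Hypothetical crowd estimates of next year's revenue ($m) for two product ranges
range_a = sample_crowd([(4.0, 5.0, 8.0), (3.5, 5.5, 9.0), (4.5, 5.0, 7.0)])
range_b = sample_crowd([(2.0, 3.0, 6.0), (1.5, 2.5, 5.0)])

total = range_a + range_b    # adding two distributions is just element-wise addition
print(f"Median total revenue: ${np.median(total):.1f}m")
print(f"80% interval: ${np.percentile(total, 10):.1f}m to ${np.percentile(total, 90):.1f}m")
```

Because every trial carries a complete scenario, the sum preserves both the crowd's central view and its uncertainty.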

Next you can use it for estimates of

  • how much revenue we will earn next year if we just continue as is, vs
  • the cost of a project to make our business perform better, and
  • how much revenue we will earn next year if we do the project


Now we can get a view of how likely the proposed project is to be the best use of time and money, compared to competing project proposals.

Or turn this on its head and add together the distributions representing the cost of mitigating a risk and the residual risk after mitigation. If you subtract that sum from the distribution of the estimated risk today, you get another distribution: the value of mitigating the risk that way. Now you can compare different ways to mitigate a risk, even though they may have quite different results.
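
Continuing the same sketch, with the same hypothetical sample_crowd helper and invented risk figures:

```python
# Illustrative risk figures in $k, using the sample_crowd helper defined above
risk_today      = sample_crowd([(50, 120, 400), (80, 150, 350)])
mitigation_cost = sample_crowd([(20, 30, 60)])
residual_risk   = sample_crowd([(10, 40, 120)])

value_of_mitigation = risk_today - (mitigation_cost + residual_risk)
print(f"Median value of mitigating: ${np.median(value_of_mitigation):.0f}k")
print(f"Chance this mitigation pays for itself: {(value_of_mitigation > 0).mean():.0%}")
```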

Finally, you can now compare business improvement projects with risk mitigation activities on the same objective, numeric scale, without over-simplifying; something that has largely escaped us until now.

There is a printable PDF infographic depicting AcuteIP.com's approach to using the Wisdom of your Crowd

With enough preparation, you can do this yourself. AcuteIP.com offers a set of services to make it easy, from workshops and technical services to managing the whole project.
Contact Graham.Harris at AcuteIP.com to explore possibilities.

Is this post useful to you, or could it be useful to someone you know? Please do us both a favour- spread the word by sharing it through the colourful social media buttons at left on PC or below on mobile.

Have you used techniques like this to evaluate risk or communicate uncertainty? Tell us your experience with a comment.

© 2018 Graham.Harris at AcuteIP.com

When do you need a Crowd?

We are used to thinking of measurements as concrete things with no margin for error. That has never been true. As I show in Everything is a Bet, even your car's speedometer reading can be way off your actual speed and still be legally accepted.

If we don't have a machine to measure something, some of us go to the opposite extreme and assume it can't be measured. Or we express our feeling about the quantity on a simple scale like Red-Amber-Green or High-Medium-Low.

In his book The Failure of Risk Management Doug Hubbard demonstrates why using subjective Likert scales (like High, Medium, Low) to represent uncertainty is worse than useless, giving an unjustified impression of objectivity. He shows why you must develop statistical models with confidence intervals. But where do we get the information for statistical models?

Professor Philip Tetlock of the Wharton School of the University of Pennsylvania famously showed that we cannot rely on expert judgment, since experts vastly overrate their own ability to predict events, individually performing about as well as, if not slightly worse than, the average daily reader of The New York Times.

So the traditional model, relying on the expertise of a responsible manager, just doesn't work.

Journalist James Surowiecki helped in 2004 by describing The Wisdom of Crowds, showing that by combining the estimates of many people we can get much better predictions than if we take the advice of a few experts. Surowiecki popularised the work of Scott E. Page and a longer philosophical tradition before him. Others, like Colson and Cooke in 2018, have built on this with techniques that reduce false confidence without losing too much specificity.

But how do you get a crowd of people to give you estimates that can build into the statistical models we need for Risk Management?

You already have the Crowd, your colleagues.
To be most effective, you should embrace diversity, bringing together people from many areas and different levels of responsibility.

Project Managers improve estimation accuracy by making the object being estimated as simple and concrete as possible. A Work Breakdown Structure breaks complex things into simpler components, estimates those, and aggregates back up.

Classic Project Management uses a simple 3-point estimate to express not just the most likely value but also the uncertainty, and the skew in probable outcomes. The result is a probability distribution as Hubbard demands. The usual PERT method over-simplifies, partly by assuming excellent estimators, but with care, we can improve on that.
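
For reference, the classic PERT rule of thumb reduces a three-point estimate to a mean and standard deviation as below. The sketch (Python, with an invented task estimate) also shows where the over-simplification lies: it assumes the estimator's extremes really do mark the tails of the distribution.

```python
def pert_mean_sd(optimistic, likely, pessimistic):
    """Classic PERT rule of thumb for a three-point estimate: the mean weights
    the most-likely value four times as heavily as either extreme, and the
    standard deviation assumes the extremes span about six sigma in total."""
    mean = (optimistic + 4 * likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return mean, sd

# A task estimated at 8 / 10 / 16 days
print(pert_mean_sd(8, 10, 16))   # about (10.67, 1.33)
```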

In his 2009 book How to Measure Anything, Hubbard shows how to calibrate estimators, who on average are over-confident before training.

US National Academy of Sciences member Amos Tversky, and Nobel prize-winner Daniel Kahneman describe how Judgment under Uncertainty is subject to unconscious cognitive biases, some self-serving but often quite innocent.

There are ways to control for cognitive biases, particularly through the way in which estimates are asked for, and by eliciting optimistic and pessimistic estimates, before asking for a 'most likely' estimate. We can also improve estimates of confidence using techniques like the Equivalent Bet test which relate them to everyday quantities.

So we know how to gather many component-level estimates by many calibrated people. Now we must aggregate them all into an overall picture. Monte Carlo simulation is ideal for that. Monte Carlo simulation doesn't hide the uncertainty in the original estimates, but shows how uncertainty is reduced through aggregation.
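
A minimal sketch of that effect, with invented component estimates: the summed distribution keeps the original uncertainty visible, but it shrinks in relative terms as independent components are aggregated.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 50_000

# Ten hypothetical work-breakdown components, each estimated at
# optimistic=8, likely=10, pessimistic=16 days
components = [rng.triangular(8, 10, 16, trials) for _ in range(10)]
total = sum(components)

def relative_width(samples):
    """Width of the 80% interval relative to the median."""
    return (np.percentile(samples, 90) - np.percentile(samples, 10)) / np.median(samples)

print(f"Relative 80% width, single component: {relative_width(components[0]):.0%}")
print(f"Relative 80% width, sum of ten:       {relative_width(total):.0%}")
# The total is still uncertain, but proportionally far less so than any one part.
```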

Combining all these tools we can expand beyond the original idea of estimating a risk:
• We can transparently create estimates under uncertainty, and have a good understanding of how confident we should be in them.
• We can predict how the raw risk will change over time if we do nothing, the costs of one or more risk treatments and their impacts on the residual risk. Thus we can prioritise risk mitigation options.
• We can aggregate estimates of risks together to achieve an overall picture and estimated distribution. For example, we can build a consolidated view of the whole organisation's risk profile.
• Finally, we can estimate the costs and benefits of business improvement projects with confidence, and transparently compare their return on investment against risk mitigation activities.

There is a printable PDF version of this item.

With enough preparation, you can do this yourself. AcuteIP.com offers a set of services to make it easy, from workshops and technical services to managing the whole exercise.
Contact Graham.Harris at AcuteIP.com to explore possibilities.

Is this post useful to you, or could it be useful to someone you know? Please do us both a favour- spread the word by sharing it through the colourful social media buttons (at left on PC or below on mobile).

© 2018 Graham.Harris at AcuteIP.com

Estimating for Architects

A presentation I delivered to the Enterprise & Solution Architecture meetup group in Auckland.

IT Architects, business owners and others need to estimate many things at an early stage of development, when the cone of uncertainty is widest. This presentation suggests ways in which architects and other early-stage estimators can estimate both cost and value, systematically reducing the uncertainty.

Contact Graham.Harris at AcuteIP.com for help applying these principles to your situation.
Call +64 27 275 4396 for an initial discussion.

Give Everyone a Voice

I had a role in investing quite large amounts of money- up to $50m a year- on behalf of my employers. So I have been up close and personal with the decision making around hundreds of millions of dollars. And at times, it was not pretty.

Some of the decisions were effectively made at far too early a stage. For example, getting an investment idea listed on a 3-year road map can start a train rolling that's very hard to stop later. People can become invested in the idea, and challenging it seems disloyal or argumentative.

But it's important to remember that such long term plans often result from a very brief process. Perhaps one person thought it was a good idea, and no real evaluation was done of costs, risks and benefits.

So before you commit real money, resources and opportunity cost to something, you need to apply more rigour to investigating it. And in my experience that means consulting a much wider range of stakeholders.

My view has strong support in James Surowiecki's 2004 book The Wisdom of Crowds. A trivial example of his gives the idea: the US TV game show "Who Wants to Be a Millionaire?" offered contestants three lifelines to help answer its progressively harder questions:

  • Take away two wrong answers (giving you a 50% chance to guess right)
  • Phone a Friend (expert judgment)
  • Ask the Audience.


Turns out, Phone your Friendly Expert got the right answer around 65% of the time, but the majority vote of Ask the Audience (random crowds of people with nothing better to do on a weekday afternoon than sit in a TV studio) picked the right answer 91% of the time.
There are many other examples, independent of Surowiecki; for example, see the crowd experiment by US National Public Radio.


Thing is, you have quite an audience among your colleagues. And they already know something about the specialist subject, so they are just as equipped to answer questions about it as the TV studio audience was about general knowledge questions.

You don't need to ask all your colleagues- at least, not if you work in a medium or large organisation. You can identify a few people with a diverse range of interests in the topic- diversity is the key- and ask them to nominate others. Diversity is the opposite of the process by which the item got onto the road map, so we're immediately getting somewhere.

Once you have your audience, you need to ask the right questions to get the right answers. Not the answers you came looking for; the right answers. You can control for potential biases and other shortcomings by asking questions the right way, in the right order.

This can all be done quickly and efficiently, if you are clear about the process. It doesn't depend on key individuals being available: the essence is that the collective knowledge of the many gives you more than any individual could.

Have you got experience of decisions that could have gone better with wider input? Or experience that sheds another light on this issue? I welcome your comments.

Short on Facts

I am running a workshop at the Project Management conference this week. It's a very interactive session with software, so this is not a substitute, just a precis of the presentation part.


Project Failure is often just a failure of Estimation.

We can get good at measuring cadence, and we can get a handle on development effort. But how good are we at estimating Benefit? or Risk?

Not just Dollars.

How likely is something to happen? A law change, a financial crisis, or a shift in sentiment (plastic bags, Facebook)?

Not just Projects either:

Acquisitions, joint ventures and re-organisations all happen with insufficient facts to make the decision easily.

What are facts anyway?

They are just measurements of something, and the measurements are full of errors. This graph shows how accurate the law requires our speedometers to be- you may be shocked.
Figure: Speedo Accuracy

So when we are short of facts, we can use opinion. Opinion is valuable and useful, but we need techniques to gather opinions systematically, using their strengths while avoiding potential sources of bias.

One of the great things about opinions is that there are so many of them. The old story: ask three economists and you will get four opinions. And that's a good thing, because it turns out that if you have many opinions, the things that are right about them tend to reinforce each other and the things that are wrong tend to cancel out.

You can measure how good someone is at giving opinions through something called a Brier score- it's a very simple, uncheatable measure that takes into account how often you are right, and how confident you are that you are right. See How Good are Your Estimates

And what you can measure, you can improve. You can be trained, and train others, to give better, more reliable opinions. See Can You Learn to Estimate

You can also train yourself to be better at taking opinions from others. Just the way you interact with them can steer them away from unconscious biases.

There is a printable PDF infographic depicting AcuteIP.com's approach to using the Wisdom of your Crowd

With preparation, you can do this yourself. AcuteIP.com offers a set of services to make it easy, from workshops and technical services to managing the whole exercise.
Contact Graham.Harris at AcuteIP.com to explore possibilities.

Could this be useful to someone you know? Spread the word ...

  • at LinkedIn, Like it, and Share it with your contacts
  • at http://AcuteIP.com, Share it through the colourful social media buttons (at left on PC, or below on mobile).

Sharing posts adds to both our mana...

© 2018 Graham.Harris at AcuteIP.com

Can You Learn to Estimate

I had an interesting experience after presenting at the Project Management Institute Conference recently. During the Questions period at the end, a delegate told me flat out that I was wrong. He argued that the ability to estimate must be born into you and cannot be taught.

So what evidence is there, that estimating can be taught and learned?

What is an Estimate?

Let's start with a definition of an estimate.

I come across many 'estimates' which are just a single number, usually a dollar value or a date. The person estimating is staking their personal credibility on the out-turn being exactly that amount or finishing on that date. Almost every estimate turns out to be 'wrong' when stated like that.

If it turns out to be right- like many forecasts public companies make to shareholders- it's usually due to a mixture of over-cautious estimation and carefully massaging what's "in" and what's "out".

More useful to the person asking for an estimate is a range- 'it will return a benefit of $x to $y' or 'we will complete in between 2 and 3 months'.

A substantial fraction of estimates (and estimators) would still be proven wrong by that score, doing little good to either estimator or client.

To be really useful, an estimate needs to include the estimator's certainty. This is precisely the territory of statistics. A truly useful estimate is actually a statistical distribution. Most people providing estimates will not think of it in that way, so there is a skill in eliciting a great estimate.

The quality of a forecast depends (in retrospect) on whether the actual out-turn value was within the range estimated, and the width of the range given. It's easy to be right if you give wide ranges, but not particularly useful.

Most useful, and my definition of an estimate, is a range qualified by a probability, like 'there is a 68% chance we will finish in 2 to 3 months'. This gives the recipient an indication how much confidence the estimator has in the range.

Can you measure Estimation?

To know that teaching anything is effective you have to be able to measure the student's ability 'before' and 'after' tuition.

This has been a concern since weather forecasting began, and applies equally to human-made and computer-model forecasts. The weather forecasting profession now uses a Brier score, which takes into account both the estimator's confidence and whether the result came within the estimated range. 0 is a perfect Brier score; scoring 1 means 100% confident and 100% wrong!

You can compute a Brier score for a single estimate, and you can get an overall Brier score across any set of estimates. So you can measure an estimator's success before training and repeat after training to identify the difference.

Evidence for Teaching Estimation

In the simplest example, I ran a training course less than 2 hours long. Participants scored on average 0.48 (on a simulated problem) before training and 0.37 on a very similar problem after- a remarkably fast improvement.

How to Teach Estimating

Practice and feedback are needed to learn anything. The best estimators tend to be those who get plenty of both- the weather forecasters already mentioned, and some gamblers like poker and bridge players. So the basis of teaching estimation is to provide opportunities for practice and rapid feedback.
I teach a few simple methods to help people avoid some of the most common types of error, techniques to improve repeatability and to visualise probability. Just keeping these in mind has been shown to improve estimation measurably.

Another, quite distinct, method I teach is crowd estimation. You can't teach a crowd to estimate, but you can teach estimators how to learn from a crowd.

How Good are your Estimates

Forecasting is fundamental

Forecasting (outcomes, benefits, risks) is fundamental to investing wisely.

We often focus on tight estimates of the cost of doing something, but pay less attention to the larger factors: the benefits and risks. Larger? Well, the benefit had better be much larger than the cost and the risks, otherwise why take the risks?

But costs are more predictable, and it's easier to hold someone to account for costs. So we tend to obsess on what we think we can manage and pay less attention to the things we think are harder to estimate.

Some people are great estimators

A few groups of people get very good at estimating (or get out). These are people who make lots of estimates (maybe called predictions, forecasts or bets) and get very rapid feedback on whether they predicted correctly or not. They include professional weather forecasters, poker players, bookmakers and a few others.
Their estimates include a probability of being correct, sometimes explicit ("a 20% chance of rain", or by offering odds on a horse about to race), sometimes implicit (whether they will place a bet or not).

Most of us just don't get the practice and particularly not the rapid feedback. We might regularly offer forecasts, but rarely remember our own forecasts (especially when they turn out wrong) or anyone else's. They are treated as they should be- as throw-aways, not worth the paper they are written on.

How to measure estimate quality

So how do we find out how good we are at estimating (or forecasting, if you prefer)? We could take up weather forecasting, but that would probably turn us off quickly because we would be so rarely correct. Or we could take up gambling, which would probably be much worse.
Some 'virtual gambling' might work, like building a virtual portfolio of shares. Even that entails a lot of effort only loosely contributing to the task of measuring how good we are at estimating.

Turns out, the weather forecasting fraternity have a tool which can help, called the Brier score. They have used it for years to drive improvements.

The Brier Score of Estimate Quality

The Brier score for a single prediction is (P(T) - T)², where P(T) is the probability you gave of being right (on a scale 0..1) and T is whether you were right (1 for right, 0 for wrong).

So the Brier score ends up as a number between 0 and 1.

A Brier score nearer to 0 means you expressed high confidence in a forecast range that turned out to include the eventual value- you were confident and correct. Conversely, a score nearer to 1 means high confidence in an estimate range that did not include the actual value- you were confident but wrong.

The Brier score judges how realistic you were in expressing your confidence: you score better with a narrower range expressing higher confidence if you turn out to be right, but worse for expressing high confidence if you turn out to be wrong.

Your Brier score for one prediction or estimate will be different from the next one, because it depends both on whether you were right and how confident you were that you were right. You can get a representative personal Brier score by averaging the Brier scores you got over several predictions.
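
A minimal sketch of that averaging (Python, with an invented track record of five range estimates):

```python
def brier(confidence, correct):
    """Brier score for one forecast: confidence is the probability (0..1) you
    gave that your range would contain the actual value; correct records
    whether it did. 0 is a perfect score, 1 is confidently wrong."""
    return (confidence - (1.0 if correct else 0.0)) ** 2

# A hypothetical track record of five range estimates
forecasts = [(0.9, True), (0.7, True), (0.8, False), (0.6, True), (0.95, True)]
scores = [brier(conf, ok) for conf, ok in forecasts]
print(f"Personal Brier score: {sum(scores) / len(scores):.2f}")   # 0.18 here
```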

Most of us still don't do enough forecasts to build up a personal Brier score quickly, but there are some ways we can get an indication: using systematic 'general knowledge' or 'virtual gambling' to clock up a bunch of individual scores fast.

There's a saying that you can't improve what you don't measure, so does being able to measure our ability to estimate let us improve it? It turns out that yes, it does. With relatively brief and simple training you can make significant improvements. This site has lots of information on that, from Can you Learn to Estimate? to Estimating for Architects.

Through just a few minutes of exercises in a workshop at the Project Management Institute conference recently, the average Brier score of about 50 project managers in the room improved from 0.48 to 0.37. That's something http://AcuteIP.com can facilitate for your organization.

Brier Score is just one part of a wider methodology that can improve your organization's ability to invest in the right things and get the best returns. Take a look at this printable PDF infographic depicting AcuteIP.com's approach to using the Wisdom of your Crowd

With preparation, you can do this yourself. AcuteIP.com offers a set of services to make it easy, from workshops and technical services to managing the whole exercise.
Contact Graham.Harris at AcuteIP.com to explore possibilities.

Could this be useful to someone you know? Spread the word ...

  • at LinkedIn, Like it, and Share it with your contacts
  • at http://AcuteIP.com, Share it through the colourful social media buttons (at left on PC, or below on mobile).

Sharing posts adds to both our reputations.

© 2018 Graham.Harris at AcuteIP.com

4 Reasons to Avoid Consensus

also known as How to Get a Good Decision

1. Consensus reinforces Groupthink

The process to achieve consensus is essentially social, not analytic or generative. Usually one of the strongest drivers is the status quo, both of how things are done here and of the hierarchy of voices.
The first problem with that is that how things were done up till now may very well not be how they need to be done going forward. In 1965, companies stayed on average 33 years in the S&P 500; by 1990 that was 20 years, and the trend points to 14 years by 2026. More of the same is no longer a survival strategy.

The second problem is that a loud voice or a confident manner doesn't guarantee its owner is well informed or a great judge of the situation and options. Others will have additional information and different experiences, and a good outcome depends on those also being taken into consideration.

2. Consensus devalues Diversity

Through the social process to build consensus, the appearance is created that there is only one answer, and everyone believes in it.

Individuals may come out of the process still not believing it's the right answer. Calling it a consensus tells them that they must be wrong in everyone else's eyes. It tells them to get with the program, and be like everyone else. It destroys the independence of thought that modern organisations actively seek when recruiting staff. If two employees have exactly the same views, the chances are at least one of them is ripe for job automation.

When someone sets out to make a consensus decision, they can get to their objective most quickly by selecting participants who already hold similar views- people from the same background as themselves.

3. Consensus selects for Less Worse, instead of Better

The process to reach consensus is a series of compromises. For every variation of view expressed, there is some give and some take. It leads to all participants conceding something, with the outcome appearing to be that no-one is particularly disadvantaged but everyone is disadvantaged a bit. Also known as "equal misery for all".

Those experienced in this process are careful to demand much more than they need from it, so that they can concede some and still win, while appearing reasonable and collaborative.

4. Consensus hides the details

Fundamentally, consensus presents a single answer to a question which may well have no single answer. Each member of the consensus to some extent buries their own views in the interest of harmony (or worse, fear or self-doubt because of the higher status or greater self-confidence of other participants).

Commonly though, there is no single right answer at all, or the group lacks the information or tools to discover it. So the outcome is that participants collectively exaggerate their certainty, and hide from non-participants the range of possibilities that the participants could have brought to the table.

As a leader, do you want to work in an echo chamber where only the 'staff answer' is allowed? Or would you prefer to hear about all your problems and opportunities?

Does this mean consensus is worthless? The shallow kind, produced by facile meetings of just the 'right' people structured to get a quick result, that kind is worse than worthless- it's downright destructive of value.

How To Get a Good Decision

  • Actively include participants with opposing views and diverse experiences
  • Run the process so that all their voices are heard
  • Work up all the options so they can be compared on a level playing field

That might end up in a consensus, but whether or not it does, it will result in a decision that everyone will respect on its own merits.

With preparation, you can avoid the pitfalls of consensus and get robust information on all your options yourself. AcuteIP.com offers a set of services to make it easy, from workshops and technical services to managing the whole exercise for you.
Contact Graham.Harris at AcuteIP.com to explore possibilities.

Could this be useful to someone you know? Spread the word ...

  • at LinkedIn, Like it, and Share it with your contacts
  • at http://AcuteIP.com, Share it through the colourful social media buttons (at left on PC, or below on mobile).


There is a printable PDF infographic depicting AcuteIP.com's approach to using the Wisdom of your Crowd

© 2018 Graham.Harris at AcuteIP.com

Everything is a Bet

Annie Duke is a psychology PhD turned Poker queen turned author of Thinking in Bets- see her 90-minute Google interview on YouTube.

But I'll summarise for you. Everything is a Bet. And the sooner you internalise that, the better.

We don't know Jack, it's all just estimates, which amount to bets.


For example, your car's speedometer: do you know how accurate it is? How accurate it has to be, by law? It shocked me.

So to make any sense of estimates, we must talk the language of bets.


Poker players know that your odds change with what you learn as the game progresses.

Reverend Bayes put that on a mathematical footing, but the principle is not so hard.

It just acknowledges that the odds change with every morsel of information you learn as you go.
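
A tiny worked example of that updating, in the odds form poker players use instinctively (all numbers invented for illustration):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# Suppose you give a project a 1-in-3 chance of hitting its date (odds 1:2).
# A pilot then finishes on time; say pilots finish on time 80% of the time when
# the project will succeed and only 30% of the time when it won't.
prior_odds = (1 / 3) / (2 / 3)                        # = 0.5
posterior_odds = update_odds(prior_odds, 0.8 / 0.3)   # new evidence shifts the odds
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"Updated chance of hitting the date: {posterior_prob:.0%}")   # about 57%
```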

So to reduce uncertainty (and get better odds of success), you should carry out business experiments focused on reducing that uncertainty at the least cost.

Many business experiments can use just the resources you have easily at hand (your staff and sometimes their routine contact with customers and suppliers). They can be quick, decisive and invisible to your competitors.

With preparation, you can do this yourself. AcuteIP offers a set of services to make it easy, from workshops and technical services to managing the whole exercise.

Contact Graham.Harris at AcuteIP.com to explore possibilities.

There is a printable PDF infographic depicting AcuteIP.com's approach to using the Wisdom of your Crowd

Could this be useful to someone you know? Spread the word ...

  • at LinkedIn, Like it, and Share it with your contacts
  • at http://AcuteIP.com, Share it through the colourful social media buttons (at left on PC, or below on mobile).

Sharing posts adds to both our mana...

© 2018 Graham.Harris at AcuteIP.com
