Step 2 – Establishing KPIs (Key Performance Indicators) and milestones to measure whether the institutional objectives are being met
There is a widely accepted management principle worth subscribing to: “What gets measured gets managed.” For any strategic thrust, it is important to know your current level of performance, what performance you aspire to, and how you will know when you get there. This can only be assured if you develop measurable performance indicators for each institutional objective. While many performance indicators can be used for each institutional objective, only a few ultimately capture the essence of whether you have achieved it. These are called Key Performance Indicators (KPIs).
In STILE point 38, I presented an example of a strategic thrust to become more customer focused. To become more customer focused, several in-process performance indicators can and should be measured, such as whether client strategies have been developed, client relationship managers assigned, and client meetings held. The Key Performance Indicators, however, are the cumulative scores on the client satisfaction surveys and the annual client feedback report. On a scale of 1 to 10, the initial satisfaction scores for each research department ranged from 4.3 to 6.5. The client satisfaction KPI for all departments was set at 8, with the expectation that it would later be raised to 9.
Using client satisfaction scores as key performance indicators can reap great rewards if performed well and used properly. In my experience, most R&D organizations either do not have such a KPI, or if they do, their execution actually hurts rather than helps their relationship with clients. In designing and executing a client feedback process, there must be a genuine management commitment of the time and resources needed to obtain maximum feedback and to act quickly and responsively on the information received. Nothing hurts a client relationship more than asking for feedback, receiving a negative response, and not following up immediately.
First and foremost, the survey must be designed to ask the right questions, targeting both current performance and expected future performance. As mentioned previously, this will provide information on how well you are providing current products or services and what products or services may be required in the future, which can be used to drive your R&D agenda. Second, the client feedback process should be used selectively on your most important clients, since doing it right requires a significant amount of time and effort. If you have a client relationship manager, formal or not, that person should conduct the survey in person if possible; if not, then by phone or video conference. We are all familiar with impersonal email surveys and how ineffective they are: the average response rate is usually less than 10%. With direct contact and persistent follow-up, you should aim for a response rate greater than 80%.
These survey results, both individual and cumulative, should become part of the performance reporting system that gets published and discussed at each management meeting. Any negative feedback from surveys should be immediately communicated to the R&D team working on the project and the cognizant manager. The appropriate manager then needs to follow up expeditiously, expressing concern and a willingness to correct the problem.
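A performance report of this kind can be kept very simple. The sketch below is purely illustrative, assuming hypothetical department names and scores; it shows one way to roll individual survey scores up into the cumulative figures discussed above, compare them against the KPI target of 8 from the earlier example, and flag any score low enough to warrant immediate follow-up (the follow-up cutoff of 5 is an assumption, not from the text).

```python
# Illustrative sketch only: rolling client satisfaction survey scores
# (1-10 scale) up into a per-department KPI report. The department
# names, scores, and alert cutoff are hypothetical; the target of 8
# matches the example in the text.

KPI_TARGET = 8       # initial target from the text (later raised to 9)
ALERT_CUTOFF = 5     # assumed threshold for "negative feedback" follow-up

# Hypothetical survey scores collected over the reporting period.
survey_scores = {
    "Materials Research": [7, 8, 9, 6],
    "Process Research":   [4, 5, 6, 5],
}

def kpi_report(scores_by_dept, target=KPI_TARGET, alert=ALERT_CUTOFF):
    """Summarize each department: average score, whether the KPI
    target was met, and whether any response needs urgent follow-up."""
    report = {}
    for dept, scores in scores_by_dept.items():
        avg = sum(scores) / len(scores)
        report[dept] = {
            "average": round(avg, 1),
            "met_target": avg >= target,
            "needs_followup": any(s <= alert for s in scores),
        }
    return report

for dept, row in kpi_report(survey_scores).items():
    print(dept, row)
```

Publishing a table like this at each management meeting makes both the cumulative trend and the individual low scores impossible to overlook.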
While time consuming, the client feedback process is an excellent way to ensure that your entire team is client focused and committed to meeting your clients’ current and future expectations. It is also an excellent method to objectively measure an R&D team’s performance on individual projects as well as its cumulative annual performance. I have heard many excuses for not implementing such a feedback system: “We don’t have the time.” “We don’t want to bother the client.” “Our clients are reluctant to give negative feedback.” Once an organization successfully implements such a system and sees how effective it can be, it will wonder why it didn’t implement it sooner.
Let me give you an example from personal experience. After implementing such a system, I met with a Division Director from one of our government clients. At the time, my research staff were conducting several projects supervised by the Division Director’s technical project managers. When asked how many staff worked for me, I responded, “My staff don’t work for me; they work for you.” He laughed and responded, “All my technical contractors say that.” I replied, “That may be true, but I can prove it.” This got his attention. I showed him a table of performance scores given to my R&D teams by his technical project managers for each of the projects conducted during the year. I then showed him the criteria I used to conduct annual performance reviews for my research staff. The criteria were heavily weighted toward these client feedback scores.
I then said, “The annual raises and promotions given to my staff are based on their performance reviews. The performance review scores depend heavily on the feedback scores that your technical project managers give my staff. Therefore, your project managers determine the raises and promotions my staff receive. Who do you think my staff work for, me or you?” The next time we competed for a contract with this government client, we scored a perfect 10 on the management section of our proposal.