Beneficial owners survey methodology

Regular readers of our annual beneficial owners survey will notice some substantial changes to this year's methodology and presentation. The most significant change is to the weighted tables. In addition, raw data tables replace the unweighted tables this year.

The raw data tables are identical to the old unweighted tables, just under a clearer name, as the methodology remains a simple average of the relevant scores for that table.

The weighted tables were calculated using a three-stage process, allowing for how important respondents considered each service category to be, the size of the respondent’s lendable portfolio and differences in the generosity of respondents’ ratings in the three regions.

The weighted tables have become our headline tables. We consider that these adjustments improve the relevance of the results, which is why we made them. The other major change is to how the results are presented. The regional and global scores of the agent lenders, set out in alphabetical order, are positioned next to each other so readers can compare them internationally at a glance.

Likewise, the service category scores are positioned so readers can identify strengths and weaknesses quickly. The winning score in each case is coloured red. Because this layout is more space-efficient, for the first time we have room to present both the weighted and raw data service category tables.


Methodology
Beneficial owners are asked to rate the performance of their securities lending providers across 12 service categories (see below), from 1 (unacceptable) to 7 (excellent). There are two methodologies: weighted and unweighted.

Unweighted methodology
All valid responses for each lender are averaged to populate unweighted tables. All beneficial owners’ responses are given an equal weight, regardless of the size of their lendable portfolio. All categories are given equal weight regardless of how important they are considered to be by respondents. No allowances are made for regional variations.
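To make the raw data calculation concrete, here is a minimal sketch in Python (the function and figures are illustrative, not the survey's own code) of a simple, equally weighted average of a lender's ratings:

```python
def unweighted_score(ratings):
    """Simple average of all valid ratings (1-7) for one lender.

    `ratings` holds one numeric score per beneficial owner; "n/a"
    answers are assumed to have been filtered out already.
    """
    return sum(ratings) / len(ratings)

# e.g. three beneficial owners rating the same lender in one category
print(unweighted_score([6, 5, 7]))  # -> 6.0
```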

Weighted methodology
Step one – weighting for lendable portfolio: The weighted table methodology makes allowances for both the size of the respondent's lendable portfolio and how important respondents, on average, consider each category to be (these were considered separately in the 2013 survey). An allowance is also made for differences between average scores in each region to produce meaningful global averages.

Weightings are attached according to the size of the respondent's lendable portfolio, meaning a greater weight is given to the views of larger beneficial owners relative to smaller ones.

The band boundaries are set, based on data collected in the 2013 survey, so that there is an equal number of beneficial owners in each group (<$500m and $500m-$2bn are given an equal weight of 0.6 as they each contained half as many responses as the other bands in 2013; the information is collected separately only because it is useful when analysing the data). The middle band of $5bn-$20bn is given a weight of 1, so an average client receives a neutral weighting.

The weightings centre on 1 to preserve comparability with unweighted scores. If a respondent does not divulge its lendable portfolio size a default weighting of 1 is used to calculate its weighted rating. The bands are as follows:

- < $500m: 0.6
- $500m-$2bn: 0.6
- $2bn-$5bn: 0.8
- $5bn-$20bn: 1
- $20bn-$50bn: 1.2
- > $50bn: 1.4

For clarity, if the lendable portfolio is, for example, $37bn, that respondent's unweighted scores are multiplied by 1.2 when the average is taken for the weighted table.
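A minimal sketch of the band lookup described above, assuming portfolio sizes are expressed in billions of dollars (the function name and structure are ours, for illustration only):

```python
def portfolio_weight(lendable_bn=None):
    """Map a lendable portfolio size (in $bn) to its band weighting.

    Respondents who do not divulge a portfolio size get the default
    neutral weighting of 1.
    """
    if lendable_bn is None:
        return 1.0   # size not disclosed
    if lendable_bn < 0.5:
        return 0.6   # < $500m
    if lendable_bn < 2:
        return 0.6   # $500m-$2bn
    if lendable_bn < 5:
        return 0.8   # $2bn-$5bn
    if lendable_bn < 20:
        return 1.0   # $5bn-$20bn
    if lendable_bn < 50:
        return 1.2   # $20bn-$50bn
    return 1.4       # > $50bn

print(portfolio_weight(37))  # -> 1.2, matching the $37bn example above
```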

Step two – weighting for importance: An additional allowance is made for how important beneficial owners consider each category to be, acknowledging that respondents regard some categories as more important than others.

Respondents are asked to rank each service category in order of how important the function is to them. An average ranking is then calculated for each of the 12 categories (11 = highest ranking, 0 = lowest). This number is then divided by 5.5 to give a weighting within a theoretical band between 0 and 2, with an average of one. Again, basing weights around one preserves comparability with unweighted scores.

To illustrate, if every respondent considers category X to be the most important it would get an average rank of 11. This is then divided by 5.5 to provide the weighting for category X – 11/5.5 = 2.
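In code, the conversion from average rank to importance weight is a single division. The sketch below is purely illustrative:

```python
def importance_weight(average_rank):
    """Convert an average importance rank (0 = least important,
    11 = most important) into a weighting between 0 and 2,
    centred on 1."""
    return average_rank / 5.5

print(importance_weight(11))   # -> 2.0, the category-X example above
print(importance_weight(5.5))  # -> 1.0, a category of average importance
```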

Step three – weighting for regional variation: In the calculation of the global average and global total scores, a final weighting is added to allow for the discrepancy between scores awarded in different regions.

There are two sources of regional variation. First, the average lendable portfolio size is different in each region. Second, there is a difference in how generous respondents are when rating their lenders. An adjustment is made because this survey aims to help beneficial owners benchmark lenders against their competitors; without it, the global figures would merely show that lenders in region X score better than those in region Y, which is not very helpful to a beneficial owner in region X choosing a lender.

Once the above weightings have been applied to the regional tables, an average score is calculated for each region. A weight for each region is then calculated and applied to scores so that the average score for each region is equal. This means that, when calculating the weighted global total and average scores, scores in the region with the highest average ratings are factored down and those in the region with the lowest are factored up; the region with an average score in the middle could be factored up, factored down or left unchanged, depending on the data.
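The article does not spell out the common target that the regional averages are scaled to. One reasonable reading, sketched below, is that each region's scores are scaled so every regional average equals the mean of the three regional averages; the figures used are illustrative, not survey data:

```python
def regional_factors(regional_averages):
    """Return a scaling factor per region so that, once applied,
    all regional averages are equal (assumption: the common target
    is the mean of the regional averages)."""
    target = sum(regional_averages.values()) / len(regional_averages)
    return {region: target / avg for region, avg in regional_averages.items()}

# Illustrative averages only: the most generous region is factored
# down (<1) and the least generous factored up (>1).
print(regional_factors({"Americas": 5.8, "Emea": 5.5, "Asia Pacific": 5.2}))
```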

TABLES AND SCORES
Overall tables
The overall table contains all responses for a lender regardless of its relationship with the beneficial owner, whether custodial or agent. The following scores are calculated: a score for each region, a global total, a global average and a score for each service category.

Regional scores are the average of all responses from beneficial owners based in that region (it is the location of the beneficial owner, not the lender, that is relevant). There are three regions, and a lender must receive a different minimum number of responses to qualify in each: seven in the Americas, five in Europe, Middle East and Africa (Emea) and four in Asia Pacific. To qualify globally, a lender must qualify in at least two regions.
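A small sketch of the qualification rule, with the thresholds as stated above (the function itself is illustrative, not the survey's own code):

```python
MIN_RESPONSES_OVERALL = {"Americas": 7, "Emea": 5, "Asia Pacific": 4}

def qualification(response_counts, thresholds=MIN_RESPONSES_OVERALL):
    """Return the regions a lender qualifies in and whether it
    qualifies globally (i.e. in at least two regions)."""
    regions = [r for r, minimum in thresholds.items()
               if response_counts.get(r, 0) >= minimum]
    return regions, len(regions) >= 2

print(qualification({"Americas": 8, "Emea": 5, "Asia Pacific": 2}))
# -> (['Americas', 'Emea'], True)
```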

Custodial and agent lender tables
Ratings of agent lenders acting in a custodial or third-party capacity are recorded in separate tables. The respondent is asked to define their relationship with the lender: custodial, agent or both. If the relationship involves both forms of arrangement, the response counts for both the custodial and agent lender tables.

Therefore, some responses will be included in both the agent and custodial lender tables. All the scores calculated for overall lenders are replicated separately for custodial and agent lenders. The qualification criteria are lower for the custodial and third-party agent lender tables than for the overall tables: to qualify for either the average or total score tables, custodial and third-party agent lenders need five responses in the Americas, four in Emea and three in Asia Pacific.
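For illustration, one way to route a response to the right tables based on the declared relationship (a sketch, not the survey's actual processing):

```python
def tables_for(relationship):
    """Return the tables a response feeds into, given the declared
    relationship: 'custodial', 'agent' or 'both'. Every response
    also counts towards the overall table."""
    tables = ["overall"]
    if relationship in ("custodial", "both"):
        tables.append("custodial")
    if relationship in ("agent", "both"):
        tables.append("agent")
    return tables

print(tables_for("both"))  # -> ['overall', 'custodial', 'agent']
```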

Most improved
The lender that improved its overall unweighted score by the greatest margin over its equivalent 2013 score is the most improved firm. Lenders are ineligible if they did not qualify for the 2013 survey. Due to the significant change in methodology it is impossible to compare weighted scores between 2013 and 2014; a weighted comparison will be added for 2015.

Service categories
Respondents are asked to rate each of their providers from one to seven across 12 service categories. The ratings of respondents for each service category are averaged to produce the final score for each provider.

The service categories are: 
- Income generated versus expectation 
- Risk management
- Reporting and transparency
- Settlement and responsiveness to recalls
- Handling of corporate actions/dividends
- Collateral management
- Relationship management/client service
- Market coverage (developed markets)
- Market coverage (emerging markets)
- Programme customisation
- Lending programme parameter management
- Provision of market and regulatory updates

To qualify for each service category table, the lender needs the same number of responses as it does to qualify for the corresponding main table; ie, to qualify for an overall, custodial or agent lender service category table the lender must qualify in two of the three regions (for example, five responses for that category in the Americas and four in Emea for a custodial or agent lender table). A lender can qualify in some categories and not others; it does not have to qualify for every service category to appear in any particular service category table.

VALID RESPONSES
For a response to count for the purposes of qualification, the beneficial owner must rate the lender in no fewer than nine of the 12 service categories – it can tick “n/a” in no more than three service categories.

It is possible for a lender to qualify globally or regionally without qualifying for all service category tables, if it receives “n/a” responses for certain categories. For example, it may not offer emerging market coverage and therefore receive a string of “n/a” ratings in that category but qualify for all other categories, regionally and globally.

If a lender receives two or more responses in the same region from the same beneficial owner, an average of the ratings will be taken and it is considered to be one response for qualification purposes. If a lender receives two or more responses from the same client in different regions – for example, pension scheme X rates lender Y in Emea and the Americas – the responses are not averaged and are counted as separate responses for qualification purposes.
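A sketch of how the validity and duplicate rules above could be applied; the data structures are our own and, for simplicity, each response is reduced to a single score:

```python
def is_valid(ratings):
    """A response is valid if at least nine of the 12 categories are
    rated, i.e. no more than three "n/a" answers."""
    scored = [r for r in ratings if r != "n/a"]
    return len(scored) >= 9

def deduplicate(responses):
    """Average duplicate responses from the same beneficial owner in
    the same region; responses from different regions stay separate."""
    grouped = {}
    for owner, region, score in responses:
        grouped.setdefault((owner, region), []).append(score)
    return [(owner, region, sum(scores) / len(scores))
            for (owner, region), scores in grouped.items()]
```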
