(2025) PASS 8011 Exam Free Practice Test with 100% Accurate Answers [Q25-Q42]

8011 dumps Free Test Engine Verified By IT Certified Experts

The PRMIA 8011 certification exam is designed for professionals seeking to demonstrate their knowledge and expertise in credit and counterparty risk management. The Credit and Counterparty Manager (CCRM) Certificate exam is offered by the Professional Risk Managers' International Association (PRMIA), a non-profit organization dedicated to advancing the practice of risk management across all industries and sectors.

Q25. If A and B are two uncorrelated securities, and VaR(A) and VaR(B) are their values-at-risk, which of the following is true for a portfolio that includes A and B in any proportion? Assume the prices of A and B are log-normally distributed.
A. VaR(A+B) > VaR(A) + VaR(B)
B. VaR(A+B) = VaR(A) + VaR(B)
C. VaR(A+B) < VaR(A) + VaR(B)
D. The combined VaR cannot be predicted until the correlation is known

First of all, if prices are lognormally distributed, then the returns (the logs of the price relatives) are normally distributed; saying that prices are lognormal is just another way of saying that returns are normal. Since the correlation between the two securities is zero, their variances can be added. But standard deviations, or volatilities, cannot be added: the combined volatility is the square root of the sum of the variances. VaR is simply a multiple of the standard deviation, so it is not additive unless the correlation is exactly 1 (perfect positive correlation, which would effectively mean we are dealing with the same asset). For zero correlation, VaR(A+B) = √(VaR(A)² + VaR(B)²). This implies the combined VaR of a portfolio holding these two securities is less than the sum of the VaRs of the two individual securities. Thus Choice 'c' is the correct answer and the other choices are wrong.
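A minimal numeric sketch of this sub-additivity, using hypothetical stand-alone VaR figures (the question itself supplies no numbers):

```python
import math

# Hypothetical stand-alone VaR figures for two uncorrelated securities
# (illustrative only; not taken from the question).
var_a = 1_000_000.0
var_b = 1_500_000.0

# With zero correlation, variances add, but volatilities (and hence VaRs,
# which are just multiples of volatility) do not:
var_combined = math.sqrt(var_a**2 + var_b**2)  # about 1,802,776

# The diversified VaR is below the simple sum of the stand-alone VaRs.
assert var_combined < var_a + var_b
print(f"VaR(A+B) = {var_combined:,.0f} < {var_a + var_b:,.0f} = VaR(A) + VaR(B)")
```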
Q26. Under the standardized approach to calculating operational risk capital under Basel II, negative regulatory capital charges for any of the business units:
A. Should be ignored completely
B. Should be offset against positive capital charges from other business units
C. Should be included after ignoring the negative sign
D. Should be excluded from capital calculations

According to Basel II, in any given year, negative capital charges (resulting from negative gross income) in any business line may offset positive capital charges in other business lines without limit. Therefore Choice 'b' is the correct answer.

Q27. Which of the following cannot be used as an internal credit rating model to assess an individual borrower?
A. Distance to default model
B. Probit model
C. Logit model
D. Altman's Z-score

Altman's Z-score and the Probit and Logit models can all be used to assess the credit rating of an individual borrower. There is no such model as the 'distance to default model', and therefore Choice 'a' is the correct answer.

Q28. Which of the following should be included when calculating the Gross Income indicator used to calculate operational risk capital under the basic indicator and standardized approaches under Basel II?
A. Insurance income
B. Operating expenses
C. Fees paid to outsourcing service providers
D. Net non-interest income

Gross income is defined by Basel II (see para 650 of the Basel standard) as net interest income plus net non-interest income. It is intended that this measure should: (i) be gross of any provisions (e.g. for unpaid interest); (ii) be gross of operating expenses, including fees paid to outsourcing service providers; (iii) exclude realised profits/losses from the sale of securities in the banking book; and (iv) exclude extraordinary or irregular items as well as income derived from insurance. In other words, gross income is calculated without deducting any provisions or operating expenses from net interest plus non-interest income, and it excludes any realised profits or losses from the sale of securities in the banking book, as well as extraordinary or irregular items and insurance income. Therefore operating expenses are not to be deducted for the purposes of calculating gross income, and neither are any provisions. Profits and losses from the sale of banking book securities are not considered part of gross income, and neither is income from insurance or extraordinary items. Of the listed choices, only net non-interest income is included in the gross income calculation; the others are excluded. Therefore Choice 'd' is the correct answer. Try to remember the components of gross income from the definition above, because in the exam the question may be phrased differently.

Q29. Loss from a lawsuit from an employee due to physical harm caused while at work is categorized per Basel II as:
A. Employment practices and workplace safety
B. Execution, delivery and process management
C. Unsafe working environment
D. Damage to physical assets

Choice 'a' is the correct answer. Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord). You should know the exact names of all loss event types, and examples of each.

Q30. Conditional VaR refers to:
A. expected average losses conditional on the VaR estimate not being exceeded
B. value at risk when certain conditions are satisfied
C. expected average losses above a given VaR estimate
D. the value at risk estimate for non-normal distributions

Conditional VaR is the expected average loss beyond a given percentile, i.e. beyond the VaR estimate at the given level of confidence. For example, even if we know the 99% VaR, we still do not know what our losses can be expected to be if that VaR estimate is exceeded. Conditional VaR answers this question by providing an estimate of the average or expected loss beyond the 99% mark. Therefore Choice 'c' is the correct answer and the other choices are nonsensical.
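A short simulation sketch of the VaR/CVaR relationship; the loss distribution and its parameters are hypothetical, chosen only to make the two quantities concrete:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated losses (hypothetical: normal with a $1m standard deviation;
# positive numbers represent losses).
losses = rng.normal(0.0, 1_000_000.0, size=100_000)

var_99 = np.quantile(losses, 0.99)        # 99% VaR: the 99th percentile of losses
cvar_99 = losses[losses > var_99].mean()  # conditional VaR: mean loss beyond the VaR

print(f"99% VaR  = {var_99:,.0f}")   # roughly 2.33m (the normal z-multiple times sigma)
print(f"99% CVaR = {cvar_99:,.0f}")  # always larger than the VaR itself
```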
Q31. When building an operational loss distribution by combining a loss frequency distribution and a loss severity distribution, it is assumed that:
I. The severity of losses is conditional upon the number of loss events
II. The frequency of losses is independent from the severity of the losses
III. Both the frequency and severity of loss events are dependent upon the state of internal controls in the bank
A. I, II and III
B. II
C. II and III
D. I and II

When an operational loss frequency distribution (which may, for example, be based upon a Poisson distribution) is combined with a loss severity distribution (for example, one based upon a lognormal distribution), it is assumed that the frequency of losses and the severity of the losses are completely independent and do not impact each other. Therefore statement II is correct, and the others are not valid assumptions underlying the operational loss distribution. (A Monte Carlo sketch of this combination follows below.)
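A minimal Monte Carlo sketch of combining the two distributions under this independence assumption; the Poisson and lognormal parameters are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: Poisson frequency (5 events per year on average)
# and lognormal severity. None of these numbers come from the question.
lam, mu, sigma = 5.0, 10.0, 1.2
n_years = 50_000

# Frequency and severity are drawn independently (statement II in Q31):
# the number of events in a year never influences how large each loss is.
counts = rng.poisson(lam, size=n_years)
annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

print(f"mean annual loss:  {annual_loss.mean():,.0f}")
print(f"99.9th percentile: {np.quantile(annual_loss, 0.999):,.0f}")
```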
Q32. Fill in the blank in the following sentence: Principal component analysis (PCA) is a statistical tool to decompose a ____________ matrix into its principal components and is useful in risk management to reduce dimensions.
A. Covariance
B. Correlation
C. Volatility
D. Positive semi-definite

PCA is a statistical tool that decomposes a positive semi-definite matrix into its principal components. The first few principal components explain nearly all the variation, and the remaining components can then be ignored as they are too small in the larger picture. In risk management, PCA is applied to a positive semi-definite correlation or covariance matrix to reveal the principal components that drive the variation. By allowing a focus on a few components, PCA reduces dimensionality. While performing the math of PCA is unlikely to be asked in the PRMIA exam, you should remember that principal components have the additional property of being uncorrelated with each other, which is useful because one component can be varied without having to worry about its effect on the other components.

Q33. Which of the following is not a credit event under ISDA definitions?
A. Restructuring
B. Obligation acceleration
C. Rating downgrade
D. Failure to pay

According to ISDA, a credit event is an event linked to the deteriorating creditworthiness of an underlying reference entity in a credit derivative. The occurrence of a credit event usually triggers full or partial termination of the transaction and a payment from protection seller to protection buyer. Credit events include bankruptcy, failure to pay, restructuring, obligation acceleration, obligation default and repudiation/moratorium. A rating downgrade is not a credit event.

Q34. Which of the following is a valid approach to determining the magnitude of a shock for a given risk factor as part of a historical stress testing exercise?
I. Determine the maximum peak-to-trough change in the risk factor over the defined period of the historical event
II. Determine the minimum peak-to-trough change in the risk factor over the defined period of the historical event
III. Determine the total change in the risk factor between the start date and the finish date of the event, regardless of peaks and troughs in between
IV. Determine the maximum single-day change in the risk factor and multiply by the number of days covered by the stress event
A. II and IV
B. I and III
C. IV only
D. I, II and IV

Stress events rarely play out over a well-defined period of time, and looking back it is always difficult to put exact start and end dates on historical stress events. Even after that is done, the question arises as to what magnitude of change in a particular risk factor (for example interest rates, spreads, or exchange rates) is reasonable to consider for the purposes of the stress test. Statements I and III correctly identify the two approaches that are acceptable and used in practice: the risk manager can either take the maximum adverse move – from peak to trough – in the risk factor, or alternatively consider the change in the risk factor from the start of the event to its end as defined for the purposes of the stress test. Between the two, the approach in statement III is considered slightly superior as it produces more believable shocks. Statement II is incorrect because we never want to consider the minimum, and statement IV is not correct as it is likely to generate a shock of a magnitude that is not plausible. Therefore Choice 'b' is the correct answer. (Both acceptable measures are sketched in code below.)
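A small sketch of the two acceptable shock measures on a hypothetical risk-factor path. Here a fall from a peak is treated as the adverse move; for a factor where rises are adverse (a credit spread, say), the same logic applies with the sign flipped:

```python
import numpy as np

def max_peak_to_trough(path: np.ndarray) -> float:
    """Largest fall from an earlier peak to a later trough (statement I)."""
    running_peak = np.maximum.accumulate(path)
    return float((running_peak - path).max())

def start_to_end_change(path: np.ndarray) -> float:
    """Net change over the whole event window (statement III)."""
    return float(path[-1] - path[0])

# Hypothetical risk-factor path over a stress window (e.g. an index level).
path = np.array([100.0, 140.0, 120.0, 180.0, 90.0, 130.0])
print(max_peak_to_trough(path))   # 90.0 (from the 180 peak down to the 90 trough)
print(start_to_end_change(path))  # 30.0 (from 100 at the start to 130 at the end)
```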
Q35. Which of the following are valid methods for selecting an appropriate model from the model space for severity estimation?
I. Cross-validation method
II. Bootstrap method
III. Complexity penalty method
IV. Maximum likelihood estimation method
A. II and III
B. I, II and III
C. I and IV
D. All of the above

Once we have a number of distributions in the model space, the task is to select the "best" distribution, the one likely to be a good estimate of the true severity. We have a number of distributions to pick from and an empirical dataset (from internal or external losses), and we can estimate the parameters for the different distributions. We then have to decide which distribution to pick, and that generally requires considering both approximation and fitting errors. The following methods are generally used for selecting a model:

1. The cross-validation method: This method divides the available data into two parts – the training set and the validation set (the validation set is also called the 'testing set'). Parameter estimation for each distribution is done using the training set, and differences are then calculated based on the validation set. Though the temptation may be to use the entire data set to estimate the parameters, that is likely to produce what appears to be an excellent fit to the data on which it is based, but without any validation. So we estimate the parameters from one part of the data (the training set) and check the differences we get on the remaining data (the validation set).

2. The complexity penalty method: This is similar to the cross-validation method, but with an additional consideration of the complexity of the model. Because more complex models are likely to produce a more exact fit than simpler models – and this fit may be spurious – a 'penalty' is added to the more complex models so as to favour simplicity over complexity. The 'complexity' of a model may be measured by the number of parameters it has; for example, a lognormal distribution has only two parameters, while a body-tail distribution combining two different distributions may have many more.

3. The bootstrap method: The bootstrap method estimates fitting error by drawing samples from the empirical loss dataset, or from the fit already obtained, and then estimating parameters for each draw, which are compared using some statistical technique. If the samples are drawn from the loss dataset, the technique is called a non-parametric bootstrap; if the samples are drawn from an estimated model distribution, it is called a parametric bootstrap.

4. Goodness-of-fit statistics: The candidate fits can be compared using MLE based on the KS distance, for example, and the best one selected. Maximum likelihood estimation is a general-purpose statistical technique that selects the parameter values under which the observed data are most likely; it can be used for parameter estimation as well as for deciding which distribution to use from the model space.

All of the listed methods are valid, so Choice 'd' is the correct answer.

Q36. Which of the following assumptions underlie the 'square root of time' rule used for computing VaR estimates over different time horizons?
I. the portfolio is static from day to day
II. asset returns are independent and identically distributed (i.i.d.)
III. volatility is constant over time
IV. no serial correlation in the forward projection of volatility
V. negative serial correlations exist in the time series of returns
VI. returns data display volatility clustering
A. III, IV, V and VI
B. I, II, V and VI
C. I, II, III and IV
D. I and II

The square root of time rule can be used to convert, say, a 1-day VaR to a 10-day VaR by multiplying the known number by the square root of time. However, key assumptions underlie the application of this rule, and statements I to IV correctly state those assumptions. Statements V and VI are not correct, because the application of the square root of time rule requires the absence of serial correlation and also the absence of volatility clustering (i.e. independence). Therefore Choice 'c' is the correct answer. The square root of time rule is also applied to convert the volatility or standard deviation for one period to the volatility for a different time period. Remember that VaR is just a multiple of volatility, and therefore the assumptions that apply to the square root of time rule for VaR also apply when the rule is used in the context of volatilities or standard deviations.

Q37. If P is the transition matrix for 1 year, how can we find the transition matrix for 4 months?
A. By calculating the cube root of P
B. By numerically calculating a matrix M such that M x M x M is equal to P
C. By dividing P by 3
D. By calculating the matrix P x P x P

Assuming time invariance and the Markov property, it is easy to calculate the transition matrix for any whole number of periods as P^n, where P is the given transition matrix for one period and n is the number of time periods for which we need the new transition matrix. However, when the new time period is shorter than the period for which the matrix is available, the only way to derive a transition matrix for the partial period is to numerically calculate a matrix M such that M x M x M = P. Therefore Choice 'b' is the correct answer. Taking the cube root of a matrix element by element is not a valid operation, dividing P by 3 gives a matrix that is meaningless in this context, and P x P x P gives the transition matrix for 3 years, not for a third of a year. (A numerical sketch follows below.)
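A sketch of the numerical root-finding using SciPy's fractional matrix power on a hypothetical 3-state transition matrix. In practice the computed root can contain small negative or complex entries that need cleanup before use; this toy matrix is well behaved:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# Hypothetical 1-year transition matrix for a toy 3-state rating system;
# the third state (default) is absorbing.
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.00, 0.00, 1.00]])

# Numerically find M such that M x M x M = P: the 4-month transition matrix.
M = fractional_matrix_power(P, 1 / 3)

# Cubing M recovers the original 1-year matrix.
print(np.allclose(np.linalg.matrix_power(M, 3), P))  # True
```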
Q38. According to Basel II's definition of operational loss event types, losses due to acts by third parties intended to defraud, misappropriate property or circumvent the law are classified as:
A. Internal fraud
B. Execution, delivery and system failure
C. External fraud
D. Third party fraud

Choice 'c' is the correct answer. Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord). You should know the exact names of all loss event types, and examples of each.

Q39. For an equity portfolio valued at V whose beta is β, the value at risk at a 99% level of confidence is represented by which of the following expressions? Assume σ represents the market volatility.
A. 2.326 x β x V x σ
B. 1.64 x V x σ / β
C. 1.64 x β x V x σ
D. 2.326 x V x σ / β

For the PRM exam, it is important to remember the z-multiples for both the 99% and 95% confidence levels (2.33 and 1.64 respectively). The value at risk of an equity portfolio is its standard deviation multiplied by the appropriate z-factor for the given confidence level. If we knew the standard deviation, VaR would be easy to calculate. The standard deviation could be derived using a correlation matrix for all the stocks in the portfolio, but that is not a trivial task. So we simplify the calculation using the CAPM and essentially say that the standard deviation of the portfolio equals the beta of the portfolio multiplied by the standard deviation of the market. VaR in this case is therefore equal to beta x market volatility x value x z-factor, i.e. 2.326 x β x V x σ, and Choice 'a' is the correct answer.

Q40. A risk analyst analyzing the positions for a proprietary trading desk determines that the combined annual variance of the desk's positions is 0.16. The value of the portfolio is $240m. What is the 10-day stand-alone VaR in dollars for the desk at a confidence level of 95%? Assume 250 trading days in a year.
A. 12,595,200
B. 157,440,000
C. 6,297,600
D. 31,488,000

The z value at the 95% confidence level is 1.64. Since the variance is 0.16, the annual volatility is 40%. By the square root of time rule, the 10-day volatility is 40% x √(10/250) = 8%. The VaR therefore is 8% x 1.64 x $240m = $31,488,000. (The arithmetic is verified in the sketch below.)
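A two-line check of this calculation, which also exercises Q36's square-root-of-time rule:

```python
import math

V = 240_000_000        # portfolio value in dollars
annual_variance = 0.16
z_95 = 1.64            # z-multiple at 95% confidence (2.33 at 99%, per Q39)

annual_vol = math.sqrt(annual_variance)         # 0.40
ten_day_vol = annual_vol * math.sqrt(10 / 250)  # square-root-of-time rule: 0.08
var_10d = z_95 * ten_day_vol * V

print(f"${var_10d:,.0f}")  # $31,488,000
```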
Q41. When estimating the risk of a portfolio of equities using the portfolio's beta, which of the following is NOT true?
A. relies upon the single-factor CAPM model
B. use of the beta assumes that the portfolio is diversified enough so that the specific risks of the individual stocks offset each other
C. explicitly considers specific risk inherent in the portfolio for risk calculations
D. using the beta significantly eases the computational burden of calculating risk

Using the beta for VaR calculations is a significant simplification based on the CAPM and on the assumption that any specific risks are diversified away. The one thing a risk model based on the CAPM does not consider is the specific risk of individual stocks because, as mentioned, these are assumed to offset each other so that the portfolio carries only the market risk reflected in its beta. Therefore Choice 'c' is not true, and is therefore the correct answer.

Q42. For credit risk calculations, correlation between the asset values of two issuers is often proxied with:
A. Credit migration matrices
B. Transition probabilities
C. Equity correlations
D. Default correlations

Asset returns are relevant for credit risk models in which a default occurs when the value of a firm's assets falls below the default threshold. When assessing credit risk for portfolios with multiple credit assets, it becomes necessary to know the asset correlations of the different firms. Since this data is rarely available, it is very common to approximate asset correlations using equity prices. Equity correlations are used as proxies for asset correlations, therefore Choice 'c' is the correct answer.

Latest PRMIA 8011 Practice Test Questions: https://www.validbraindumps.com/8011-exam-prep.html