1. DEFINITION OF TERMS

Gross Domestic Product

The Gross Domestic Product (GDP) is the market value of goods and services produced in an economy during a period of time, irrespective of the nationality of the people who produce them. It is calculated without making any deduction for depreciation. It is obtained by valuing outputs of goods and services at market prices and then aggregating them. It should be noted that only goods used for final consumption, investment goods (capital) and changes in stocks are included.

Gross Fixed Capital Formation (GFCF)

Gross Fixed Capital Formation is expenditure on fixed assets (such as buildings and machinery), either to replace or to add to the stock of existing fixed assets. Usually the amount of new issues (in the capital market) is used to determine the relationship between the capital market and GFCF.

Gross National Savings (GNS)

This shows the amount of domestic and foreign investment financed from domestic output, comprising public and private savings. It is gross domestic investment plus the net exports of goods and non-factor services.

Consumer Price Index

An index number is a weighted average of a number of statistical observations of some economic attribute, expressed as a percentage of a similar weighted average calculated at an earlier or base period. The most familiar is the retail price index, or cost of living index, otherwise known as the Consumer Price Index (CPI).

The Consumer Price Index (CPI) measures changes in the average level of prices over a period of time. In other words, it is an indicator of what is happening to the prices consumers are paying for the items they purchase. With a given starting point or base period, which is usually set at 100, the CPI can be used to compare current period consumer prices with those in the base period. For instance, a CPI of 125 indicates that consumer prices generally have gone up by 25% above the base period for the same basket of items.
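As a minimal illustrative sketch of this calculation (the items, prices and weights below are hypothetical, not drawn from any actual CPI basket):

```python
# Hypothetical basket: base-period prices, current prices and expenditure weights.
base_prices = {"food": 50.0, "housing": 200.0, "transport": 30.0}
current_prices = {"food": 65.0, "housing": 240.0, "transport": 37.5}
weights = {"food": 0.5, "housing": 0.3, "transport": 0.2}  # weights sum to 1

# Weighted average of price relatives, scaled so the base period equals 100.
cpi = 100 * sum(weights[i] * current_prices[i] / base_prices[i] for i in weights)
print(f"CPI = {cpi:.1f}")  # 126.0 here: prices are 26% above the base period
```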

Price-Earnings Ratio (P/E)

The price-earnings ratio is derived by dividing the prevailing market price of an equity by the Earnings Per Share (EPS). The earnings per share itself is derived by dividing the Profit After Tax (PAT) by the total number of shares outstanding of a company.

The price-earnings ratio depicts how well an equity investment in a company is covered by earnings. It may be based on the actual earnings of a company or on a projected figure. It gives an idea of the period it would take an investor to recoup his/her investment, provided the present earnings trend of the company is maintained. The lower the price-earnings ratio, the better for the investor. The perceived investment risk of an equity is usually reflected in its price-earnings ratio. A company with a good and stable financial and dividend payment track record could have a high price-earnings ratio, while a poorly performing company would likely record a low price-earnings ratio.
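A minimal sketch of these two definitions in Python (all figures are hypothetical):

```python
# Hypothetical company figures for illustration only.
profit_after_tax = 500_000_000      # PAT
shares_outstanding = 1_000_000_000  # total shares outstanding
market_price = 2.50                 # prevailing market price per share

eps = profit_after_tax / shares_outstanding  # Earnings Per Share = 0.50
pe_ratio = market_price / eps                # Price-Earnings ratio = 5.0
print(f"EPS = {eps:.2f}, P/E = {pe_ratio:.1f}")
```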

Dividend Yield

Dividend is the part of earnings (if declared) that is paid to the shareholders as their share of the profit of the company. Payment of a dividend is not compulsory; the entire profit after tax can be ploughed back for the growth of the company. Dividend Yield (DY) is the ratio of the dividend per share to the market price of an equity, usually expressed as a percentage. For instance, if company A paid a dividend of N2.00 per share and the market price of the company's share is N100.00, then the Dividend Yield is calculated thus:

DY = (Dividend per Share / Market Price of the Share) x 100 = (2 / 100) x 100 = 2%

The usefulness of this is that it enables one to compare the yield with the interest rate on savings and deposits in the bank. A high dividend yield will attract investment into the stock market.
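The worked example above, expressed as a short Python sketch:

```python
dividend_per_share = 2.00  # dividend paid per share (from the example above)
market_price = 100.00      # market price of the share

dividend_yield = dividend_per_share / market_price * 100
print(f"DY = {dividend_yield:.1f}%")  # 2.0%, comparable against a bank deposit rate
```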

Trading Value

This is the amount of deals transacted on a Stock Exchange during a given period. The trading value is derived by multiplying the number of shares of a particular company that changed hands by the prevailing price. It should be noted that shares of a company can be bought at different prices on any trading day. The price at which the last transaction is carried out is the closing price or current market price. The sum of all the transactions in a particular day becomes the total value for the day for such a company. The aggregate or total sum of all transactions from all companies gives the total value traded on the Stock Exchange for a day, a month or a year.
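A minimal sketch of this aggregation (the company names, deal sizes and prices are hypothetical):

```python
# Each deal is (company, shares traded, deal price).
deals = [
    ("ABC Plc", 10_000, 4.20),
    ("ABC Plc", 5_000, 4.25),
    ("XYZ Plc", 20_000, 1.10),
]

value_by_company = {}
for company, shares, price in deals:
    # Value of a deal = number of shares that changed hands x price.
    value_by_company[company] = value_by_company.get(company, 0.0) + shares * price

total_value_traded = sum(value_by_company.values())  # exchange-wide total for the period
print(value_by_company, total_value_traded)
```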

A rapid increase in the trading volume/value of a security on an exchange is indicative of interest in the security or the market. The trading volume and value are important indicators of the level of liquidity, the efficiency of the infrastructural facilities of a stock market and the investment culture of the populace. Liquidity is the ease with which a security is converted into cash. The activities of portfolio managers, pension fund managers and unit trust managers are capable of stimulating trading and improving liquidity. The level of awareness of investors and the number of listings could have a favourable impact on the trading volume. The market float, which is the proportion of shares available for trading, is another factor influencing trading activities.

Market Capitalization

Equity market capitalization is perhaps the most important criterion in assessing the size of a capital market. It is a function of the market price and the size of the paid-up capital of listed companies. For an individual company, the market capitalization is the product of the market price and the number of outstanding shares, i.e. Number of Outstanding Shares x Market Price. The sum total of market capitalization for all listed equities on an Exchange gives the aggregate equity market capitalization of a stock market.

For individual quoted companies, the size of market capitalization is an indicator of the market value (i.e. investors' perception or assessment) of the company. Thus market capitalization fluctuates with movements in the market price of a company's equity and with changes in its outstanding shares. For instance, an increase in the outstanding shares of a company, with the market price either held constant or increased, would enhance the market capitalization of the company. Generally, the aggregate market capitalization of a stock market would show an upward trend in a bullish market, while the converse would happen in a bearish market situation.
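A minimal sketch of both the company-level and the aggregate computations (all listings and figures are hypothetical):

```python
# (company, outstanding shares, market price) -- hypothetical listings.
listings = [
    ("ABC Plc", 1_000_000_000, 2.50),
    ("XYZ Plc", 400_000_000, 10.00),
]

# Market capitalization = Number of Outstanding Shares x Market Price.
caps = {name: shares * price for name, shares, price in listings}
aggregate_market_cap = sum(caps.values())  # total for the whole exchange
print(caps, aggregate_market_cap)
```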

Turnover Ratio

This is the ratio of trading value to market capitalization, usually expressed as a percentage. It is another method of assessing how active or how liquid a stock market is. Portfolio investors, and indeed other investors, consider how fast and how easily they can buy and sell securities when the need arises before taking a decision to invest in a stock market. An investor in the primary market also considers as very important what happens after he might have bought shares in the primary market and later needs his money for something else or to re-invest in another area. Therefore his decision to participate in the primary market would be based on his ability to dispose of the security in the market.
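Expressed as a short sketch (the figures are hypothetical):

```python
trading_value = 150_000_000.0            # total value traded over the period
market_capitalization = 3_000_000_000.0  # aggregate market capitalization

turnover_ratio = trading_value / market_capitalization * 100
print(f"Turnover ratio = {turnover_ratio:.1f}%")  # 5.0%; higher implies a more liquid market
```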

Historical Variables

These variables are discernible from the annual reports and accounts. They include:

a. End-of-year market price per share.
b. Total dividends paid per share (adjusted to the number of shares outstanding at year end).
c. Reported earnings per share (adjusted to exclude non-recurring items).
d. Average dividend pay-out ratio.

Expectational Variables

These variables are futuristic in nature. They include:

a. Forecasts of short-term earnings growth.
b. Estimates of the normal earning power of each company; and
c. Estimates of the instability of the earnings stream.

Industrial Variables

These variables are related to the nature of the business and hence the perception of the investing public and government. They include:

a. Extent of government regulation.
b. Vulnerability to government action.
c. Management resilience.

Stochastic Trend

Informally, a stochastic trend is defined as the part of a time series which is expected to persist into the indefinite future, yet is not predictable from the past.

Formally, a series is said to contain a stochastic trend if it is non-stationary in levels even after removing a linear trend, whereas the process is stationary in differences (Bernard and Durlauf, 1991).
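A standard textbook illustration of this definition (not drawn from the source) is the random walk with drift, which is non-stationary in levels but stationary in first differences:

```latex
% Random walk with drift: a canonical stochastic trend.
\[
  y_t = \mu + y_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{i.i.d.}(0, \sigma^2)
\]
% Its first difference is stationary, as the formal definition requires:
\[
  \Delta y_t = y_t - y_{t-1} = \mu + \varepsilon_t
\]
```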

Bayesian Error

The Bayesian error is the difference between the true or full-information expectation of the fundamentals and the subjective expectation.

Autocorrelation (Serial Correlation)

It is possible to correlate values of a variable X at a certain time with corresponding values of X at earlier times. Such correlation is often called autocorrelation (Spiegel and Stephens, 1999).

Also, according to Horngren, Foster and Datar (1997), serial correlation (also called autocorrelation) means that there is a systematic pattern in the sequence of residuals, such that the residual in observation t conveys information about the residuals in observations t+1, t+2 and so on. In time series data, inflation is a common cause of autocorrelation because it causes costs (and hence residuals) to be related over time.

Autocorrelation can also occur in cross-sectional data.
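A minimal sketch of checking a residual series for first-order autocorrelation (the residuals below are hypothetical):

```python
import numpy as np

# Hypothetical residuals from a fitted model, in time order.
residuals = np.array([1.2, 0.9, 1.1, 0.3, -0.2, -0.8, -1.0, -0.6, 0.1, 0.7])

# Lag-1 autocorrelation: correlation of each residual with the previous one.
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]
print(f"lag-1 autocorrelation = {lag1:.2f}")  # values far from 0 suggest serial correlation
```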

Residuals

The vertical deviation of the observed value Y from the regression line estimate ŷ is called the residual term, disturbance term or error term: e = Y − ŷ.
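A short sketch computing the residuals from a fitted least-squares line (the data are hypothetical):

```python
import numpy as np

# Hypothetical observations; fit a simple regression and compute e = Y - y_hat.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

b, a = np.polyfit(x, y, 1)  # slope and intercept of the least-squares line
y_hat = a + b * x           # fitted values on the regression line
residuals = y - y_hat       # vertical deviations of each observation from the line
print(residuals)
```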

Homoscedasticity (Constant Variance)

The assumption of constant variance implies that the residual terms are unaffected by the level of the independent variables. The assumption also implies that there is a uniform scatter or dispersion of the data points about the regression line. The scatter diagram is the easiest way to check for constant variance. Constant variance is also known as homoscedasticity.

Heteroscedasticity

In a regression analysis, once the scatter is not uniform around the line of best fit, that is, once there is a violation of the assumption of constant variance, we refer to the situation as heteroscedasticity.

Heteroscedasticity does not affect the accuracy of the regression estimates a and b. It does, however, reduce the reliability of the estimates of the standard errors and thus affects the precision with which inferences can be drawn.

Multicollinearity

Multicollinearity (also known as simultaneous relationship) exists when two or more independent variables are highly correlated with each other. Generally, users of regression analysis believe that a coefficient of correlation between independent variables greater than 0.70 indicates multicollinearity. Multicollinearity increases the standard errors of the coefficients of the individual variables. The result is that there is greater uncertainty about the underlying value of the coefficients of the individual independent variables. That is, variables that are economically and statistically significant will appear insignificant (Horngren et al, 1997).

Validity

Validity is the extent to which a measure or set of measures correctly represents the concept of study, i.e. the degree to which it is free from any systematic nonrandom error. Validity is concerned with how well the concept is defined by the measure(s), whereas reliability relates to the consistency of the measure(s).

Factor Analysis

This is an interdependence technique whose primary purpose is to define the underlying structure among the variables in the analysis. It is a statistical approach that involves condensing the information contained in a number of original variables into a smaller set of dimensions (factors) with a minimum loss of information (Hair et al, 1992). Factor analysis provides the tools for analyzing the structure of the interrelationships (correlations) among a large number of variables (e.g., test scores, test items, questionnaire responses) by defining sets of variables that are highly interrelated, known as factors. These groups of variables (factors), which are by definition highly intercorrelated, are assumed to represent dimensions within the data. If one is only concerned with reducing the number of variables, then the dimensions can guide the creation of new composite measures. However, if one has a conceptual basis for understanding the relationships between variables, then the dimensions may actually have meaning for what they collectively represent. In the latter case, these dimensions may correspond to concepts that cannot be adequately described by a single measure.

Bartlett Test of Sphericity

This is a statistical test for the presence of correlations among several variables: a test for the overall significance of all correlations within a correlation matrix. It provides the statistical significance that the correlation matrix has significant correlations among at least some of the variables. It tests the null hypothesis that the correlation matrix is an identity matrix, i.e. a matrix in which all the diagonal elements are 1 and all off-diagonal elements are 0. Note, however, that increasing the sample size causes the Bartlett test to become more sensitive in detecting correlations among the variables.

Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO)

The Kaiser-Meyer-Olkin measure of sampling adequacy tests whether the partial correlations among variables are small. It is an index for comparing the magnitudes of the observed correlation coefficients to the magnitudes of the partial correlation coefficients. The KMO measure should be greater than 0.5 for a satisfactory factor analysis to proceed, and a value of 0.6 is a suggested minimum (Hair et al, 1992). Large values of the KMO measure indicate that a factor analysis of the variables is a good idea.
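A minimal sketch of the Bartlett test statistic in its standard chi-square form (the data here are randomly generated purely for illustration, so the test should not reject the null):

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: H0 is that the correlation matrix
    is an identity matrix (no correlations among the variables)."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)  # p x p correlation matrix
    chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    p_value = stats.chi2.sf(chi_square, dof)  # small p-value -> reject H0
    return chi_square, p_value

# Hypothetical data: 200 observations on 5 variables.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))
print(bartlett_sphericity(data))
```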

Principal Component Analysis (PCA)

This is a statistical technique for data reduction. The method provides a unique solution, so that the original data can be reconstructed from the results. Unlike factor analysis, principal component analysis is not usually used to identify underlying latent variables; hence, the loadings onto the components are not interpreted as the factors in a factor analysis would be. Principal component analysis is a technique that requires a large sample size, because it is based on the correlation matrix of the variables involved, and correlations usually need a large sample size before they stabilize. Tabachnick and Fidell (2001) cite Comrey and Lee's (1992) advice regarding sample size: 50 cases is very poor, 100 is poor, 200 is fair, 300 is good, 500 is very good, and 1,000 or more is excellent. As a rule of thumb, a bare minimum of 10 observations per variable is necessary to avoid computational difficulties.

Eigenvalues

This is the column sum of squared loadings for a factor; also referred to as the latent root. It represents the amount of variance accounted for by a factor.

Component Analysis

This is a factor model in which the factors are based on the total variance. With component analysis, unities (1s) are used in the diagonal of the correlation matrix; this procedure computationally implies that all the variance is common or shared.
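A minimal sketch of principal component analysis via eigendecomposition of the correlation matrix (the data are randomly generated purely for illustration):

```python
import numpy as np

# Hypothetical data: 300 observations on 4 variables.
rng = np.random.default_rng(1)
data = rng.normal(size=(300, 4))

corr = np.corrcoef(data, rowvar=False)            # 4 x 4 correlation matrix
eigenvalues, eigenvectors = np.linalg.eigh(corr)  # eigh: for symmetric matrices
order = np.argsort(eigenvalues)[::-1]             # sort components by variance explained
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Each eigenvalue (latent root) is the variance accounted for by that component.
explained = eigenvalues / eigenvalues.sum()
print(eigenvalues, explained)
```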

Regression Analysis

Regression analysis includes any techniques for modeling and analyzing several variables when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps us understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables, that is, the average value of the dependent variable when the independent variables are held fixed. Less commonly, the focus is on a quantile or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a probability distribution.

Stepwise Regression

In statistics, stepwise regression includes regression models in which the choice of predictive variables is carried out by an automatic procedure. Usually this takes the form of a sequence of F-tests, but other techniques are possible, such as t-tests, adjusted R-square, the Akaike information criterion, the Bayesian information criterion, Mallows' Cp, or the false discovery rate. One of the main issues with stepwise regression is that it searches a large space of possible models; hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in-sample than it does on new out-of-sample data. This problem can be mitigated if the criterion for adding (or deleting) a variable is strict enough. The key line in the sand is at what can be thought of as the Bonferroni point: namely, how significant the best spurious variable should be based on chance alone. On a t-statistic scale, this occurs at about √(2 log p), where p is the number of predictors. Unfortunately, this means that many variables which actually carry signal will not be included. This fence turns out to be the right tradeoff between over-fitting and missing signal. If we look at the risk of different cutoffs, then using this bound will be within a 2 log p factor of the best possible risk.

Any other cutoff will end up having a larger such risk inflation. The main approaches are:

Forward selection, which involves starting with no variables in the model, trying out the variables one by one and including them if they are 'statistically significant'.

Backward elimination, which involves starting with all candidate variables and testing them one by one for statistical significance, deleting any that are not significant.

Methods that are a combination of the above, testing at each stage for variables to be included or excluded (a simple forward-selection sketch is given after the next definition).

Multiple Regression

Multiple regression is a flexible method of data analysis that may be appropriate whenever a quantitative variable (the dependent or criterion variable) is to be examined in relationship to any other factors (expressed as independent or predictor variables). Relationships may be nonlinear, independent variables may be quantitative or qualitative, and one can examine the effects of a single variable or of multiple variables with or without the effects of other variables taken into account (Cohen, Cohen, West, & Aiken, 2003).
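As referenced above, a minimal sketch of forward selection applied to a multiple regression model, using a p-value criterion at each step; the statsmodels library is assumed available, and the 0.05 threshold and generated data are purely illustrative:

```python
import numpy as np
import statsmodels.api as sm

def forward_selection(X, y, alpha=0.05):
    """Greedy forward selection: at each step, add the candidate variable
    with the smallest p-value, stopping when none is significant at alpha."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            model = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()
            pvals[j] = model.pvalues[-1]  # p-value of the newly tried variable
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break                         # no remaining variable is significant
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical data: y depends on the first two of five candidate predictors.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=200)
print(forward_selection(X, y))  # expected to pick columns 0 and 1
```

Note the overfitting caveat above: a stricter alpha admits fewer spurious variables at the cost of missing some genuine signal.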
