References
http://www.socialresearchmethods.net/tutorial/Flynn/cluster.htm
http://www.socialresearchmethods.net/tutorial/Flynn/factor.htm
http://www.answers.com/topic/causal-analysis
http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc41.htm
http://www.statisticalforecasting.com/forecasting-methods.php
http://probability.ca/jeff/ftpdir/anjali0.pdf
https://admin.acrobat.com/_a16103949/p29523004/
http://en.wikipedia.org/wiki/Urban_planning#Process
Planning Process
Urban Planning Process
The traditional planning process was top-down: the urban planner created the plans. The planner was usually skilled in surveying, engineering, or architecture, bringing to the town planning process ideals based on these disciplines. Planners typically worked for national or local governments.
Over recent decades, the role of the urban planner in the planning process has changed considerably.[1] Growing demands for democratic planning and development processes have given the public a much larger part in important planning decisions, and community organizers and social workers are now deeply involved in planning at the grassroots level.[25]
Developers, too, have played a large role in shaping how development occurs, particularly through project-based planning. Many recent developments are the result of large- and small-scale developers who purchased the land, designed the district, and constructed the development from scratch. The Melbourne Docklands, for example, was largely an initiative of private developers seeking to redevelop the waterfront into a high-end residential and commercial district.
Recent theories of urban planning, espoused for example by Salingaros, see the city as an adaptive system that grows according to processes similar to those of plants, and argue that urban planning should therefore take its cues from such natural processes.
Multivariate Statistics
Factor Analysis
Definition and an example
Factor analysis is:
a statistical approach that can be used to analyze interrelationships among a large number of variables and to explain these variables in terms of their common underlying dimensions (factors). The statistical approach involves finding a way of condensing the information contained in a number of original variables into a smaller set of dimensions (factors) with a minimum loss of information (Hair et al., 1992).
Factor analysis could be used to verify your conceptualization of a construct of interest. For example, in many studies, the construct of "leadership" has been observed to be composed of "task skills" and "people skills." Let's say that, for some reason, you are developing a new questionnaire about leadership and you create 20 items. You think 10 will reflect "task" elements and 10 "people" elements, but since your items are new, you want to test your conceptualization.
Before you use the questionnaire on your sample, you decide to pretest it (always wise!) on a group of people who are like those who will be completing your survey. When you analyze your data, you do a factor analysis to see if there are really two factors, and if those factors represent the dimensions of task and people skills. If they do, you will be able to create two separate scales, by summing the items on each dimension. If they don't, well it's back to the drawing board.
What you need in order to do a factor analysis
Remember, factor analysis requires that you have data in the form of correlations, so all of the assumptions that apply to correlations are relevant here.
Types of factor analysis. There are two main types:
• Principal component analysis -- this method provides a unique solution, so that the original data can be reconstructed from the results. It looks at the total variance among the variables, so the solution generated will include as many factors as there are variables, although it is unlikely that they will all meet the criteria for retention. There is only one method for completing a principal components analysis; this is not true of any of the other multidimensional methods described here.
• Common factor analysis -- this is what people generally mean when they say "factor analysis." This family of techniques uses an estimate of common variance among the original variables to generate the factor solution. Because of this, the number of factors will always be less than the number of original variables. So, choosing the number of factors to keep for further analysis is more problematic using common factor analysis than in principal components analysis.
Steps in conducting a factor analysis
There are four basic factor analysis steps:
• data collection and generation of the correlation matrix
• extraction of initial factor solution
• rotation and interpretation
• construction of scales or factor scores to use in further analyses
Extraction of an initial solution
The output of a factor analysis will give you several things. The table below shows how the output helps to determine the number of components/factors to be retained for further analysis. One good rule of thumb for determining the number of factors is the "eigenvalue greater than 1" criterion. For the moment, let's not worry about the meaning of eigenvalues; this criterion simply ensures that any factor we keep accounts for at least as much variance as one of the original variables. When applying this rule, keep in mind that when the number of variables is small, the analysis may produce fewer factors than "really" exist in the data, while a large number of variables may produce more factors meeting the criterion than are meaningful. There are other criteria for selecting the number of factors to keep, but this is the easiest to apply, since it is the default in most statistical computer programs.
Note that the factors will all be orthogonal to one another, meaning that they will be uncorrelated.
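To make the extraction step and the "eigenvalue greater than 1" rule concrete, here is a minimal Python sketch. It computes the eigenvalues of the correlation matrix directly with NumPy; the data matrix is randomly generated as a stand-in for real questionnaire responses, so the eigenvalues it prints are illustrative only.

    import numpy as np

    def eigenvalues_from_data(data):
        """Eigenvalues of the correlation matrix, largest first."""
        corr = np.corrcoef(data, rowvar=False)  # variables are the columns
        eigenvalues = np.linalg.eigvalsh(corr)  # eigvalsh: symmetric matrices
        return eigenvalues[::-1]                # eigvalsh sorts ascending

    # Hypothetical pretest: 200 respondents answering 6 items.
    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 6))

    eigs = eigenvalues_from_data(data)
    n_factors = int(np.sum(eigs > 1.0))         # eigenvalue-greater-than-1 rule
    print(eigs)
    print("factors to retain:", n_factors)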
Remember that in our hypothetical leadership example, we expected to find two factors, representing task and people skills. The first output is the results of the extraction of components/factors, which will look something like this:
Table #1: Sample extraction of components/factors
Factors    Eigenvalue    % of variance    Cumulative % of variance
1          2.6379            44.5                 44.5
2          1.9890            39.3                 83.8
3          0.8065             8.4                 92.2
4          0.6783             7.8                100.0
Interpreting your results
Since the first two factors were the only ones that had eigenvalues > 1, the final factor solution will only represent 83.8% of the variance in the data. The loadings listed under the "Factor" headings represent a correlation between that item and the overall factor. Like Pearson correlations, they range from -1 to 1. The next panel of factor analysis output might look something like this:
Table #2: Unrotated Factor Matrix
Variables                                Factor 1    Factor 2    Communality
Ability to define problems                  .81        -.45          .87
Ability to supervise others                 .84        -.31          .79
Ability to make decisions                   .80        -.29          .90
Ability to build consensus                  .89         .37          .88
Ability to facilitate decision-making       .79         .51          .67
Ability to work on a team                   .45         .43          .72
This table shows the difficulty of interpreting an unrotated factor solution. All of the largest loadings are on Factor 1. This is a common pattern. One way to obtain more interpretable results is to rotate your solution. Most computer packages use varimax rotation, although there are other techniques.
Below is an example of what the factors might look like if we rotated them. Notice that the loadings are distributed between the factors, and that the results are easier to interpret.
Table #3: Rotated Factor Matrix
Variables                                Factor 1    Factor 2    Communality
Ability to define problems                  .68         .17          .87
Ability to supervise others                 .87         .24          .79
Ability to make decisions                   .65         .07          .90
Ability to build consensus                  .16         .76          .88
Ability to facilitate decision-making       .30         .83          .67
Ability to work on a team                   .19         .69          .72
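If you want to produce a rotated solution like Table #3 yourself, the sketch below uses scikit-learn's FactorAnalysis estimator, which supports the varimax rotation mentioned above. The data matrix X is random placeholder data, so the printed loadings will not match the table.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 6))       # placeholder for 6 questionnaire items

    fa = FactorAnalysis(n_components=2, rotation="varimax")
    fa.fit(X)

    # components_ is (n_factors, n_items); transpose it to get the familiar
    # items-by-factors loading matrix laid out like Table #3.
    loadings = fa.components_.T
    print(np.round(loadings, 2))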
Naming the factors
Now we have a highly interpretable solution, which represents 83.8% of the variance in the data. The next step is to name the factors. There are a few rules suggested by methodologists:
Factor names should
• be brief, one or two words
• communicate the nature of the underlying construct
Look for patterns of similarity between items that load on a factor. If you are seeking to validate a theoretical structure, you may want to use the factor names that already exist in the literature. Otherwise, use names that will communicate your conceptual structure to others. In addition, you can try looking at what items do not load on a factor, to determine what that factor isn't. Also, try reversing loadings to get a better interpretation.
Using the factor scores
It is possible to do several things with factor analysis results, but the most common are to use factor scores, or to make summated scales based on the factor structure.
Because the results of a factor analysis can be strongly influenced by the presence of error in the original data, Hair et al. recommend using factor scores if the scales used to collect the original data are "well-constructed, valid, and reliable" instruments. Otherwise, they suggest that if the scales are "untested and exploratory, with little or no evidence of reliability or validity," summated scores should be constructed. An added benefit of summated scores is that if they are to be used in further analysis, they preserve the variation in the data.
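As a small illustration of the summated-scale option, the sketch below sums each respondent's items within each factor. The assignment of items 0-2 to "task" and items 3-5 to "people" is hypothetical, loosely following the leadership example.

    import numpy as np

    def summated_scales(X, factor_items):
        """Sum each respondent's item scores within each named factor."""
        return {name: X[:, idx].sum(axis=1)
                for name, idx in factor_items.items()}

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))                    # placeholder item scores
    scales = summated_scales(X, {"task": [0, 1, 2],  # hypothetical structure
                                 "people": [3, 4, 5]})
    print(scales["task"][:5])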
Cluster Analysis
Definition
Cluster analysis is a multivariate analysis technique that seeks to organize information about variables so that relatively homogeneous groups, or "clusters," can be formed. The clusters formed with this family of methods should be highly internally homogeneous (members are similar to one another) and highly externally heterogeneous (members are not like members of other clusters).
Although cluster analysis is relatively simple and can use a variety of input data, it is a comparatively new technique that is not supported by a comprehensive body of statistical literature. Most of the guidelines for using cluster analysis are therefore rules of thumb, and some authors caution that researchers should apply cluster analysis with care.
What you need in order to do a cluster analysis
Like multidimensional scaling (MDS), cluster analysis can accept a wide variety of input data. While these are generally called "similarity" measures, they can also be termed "proximity," "resemblance," or "association." Some authors recommend using standardized data, since you may be clustering items measured on different scales, and standardizing will give you a "unit free" measure.
Steps in conducting a cluster analysis
There are four basic cluster analysis steps:
• data collection and selection of the variables for analysis
• generation of a similarity matrix
• decision about number of clusters and interpretation
• validation of cluster solution
Output of a cluster analysis
The main outcome of a cluster analysis is a dendrogram, which is also called a tree diagram. [Tree plot for the leadership example described in the Factor Analysis section not reproduced here.]
In such a plot, the two dimensions of task and people skills also emerge from this analysis; the difference is that you can see which variables are closer to which others, based on which link first.
How many clusters to keep or "Where to cut the tree?"
Like the other techniques, cluster analysis presents the problem of how many factors, or dimensions, or clusters to keep. One rule of thumb for this is to choose a place where the cluster structure remains stable for a long distance. Some other possibilities are to look for cluster groupings that agree with existing or expected structures, or to replicate the analysis on subsets of the data to see if the structures emerge consistently.
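The whole sequence (linkage, dendrogram, cutting the tree) can be sketched with SciPy as below. The data matrix is random placeholder data standing in for the six leadership items; with real data, the rows would be the standardized variables being clustered.

    import numpy as np
    from scipy.cluster.hierarchy import dendrogram, fcluster, linkage

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 20))        # 6 variables, 20 observations each

    Z = linkage(X, method="average")    # hierarchical linkage (the tree)
    tree = dendrogram(Z, no_plot=True)  # tree layout; set no_plot=False (with
                                        # matplotlib installed) to draw it

    # "Cut the tree" into two clusters, as in the task/people example.
    labels = fcluster(Z, t=2, criterion="maxclust")
    print(labels)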
Time Series Analysis
Definition: A time series is an ordered sequence of values of a variable at equally spaced time intervals. Time series occur frequently when looking at industrial data.
Applications: The usage of time series models is twofold:
• Obtain an understanding of the underlying forces and structure that produced the observed data
• Fit a model and proceed to forecasting, monitoring or even feedback and feedforward control.
Time Series Analysis is used for many applications such as:
• Economic Forecasting
• Sales Forecasting
• Budgetary Analysis
• Stock Market Analysis
• Yield Projections
• Process and Quality Control
• Inventory Studies
• Workload Projections
• Utility Studies
• Census Analysis
and many, many more.
Techniques: There are many methods used to model and forecast time series, and the fitting of time series models can be an ambitious undertaking. Methods of model fitting include the following:
• Box-Jenkins ARIMA models
• Box-Jenkins Multivariate Models
• Holt-Winters Exponential Smoothing (single, double, triple)
The user's application and preference will decide the selection of the appropriate technique. It is beyond the realm and intention of the authors of this handbook to cover all these methods. The overview presented here will start by looking at some basic smoothing techniques (a short code sketch of both follows the list):
• Averaging Methods
• Exponential Smoothing Techniques.
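As a concrete starting point, here is a sketch of both techniques in plain NumPy: an equal-weight moving average and single exponential smoothing. The series values are invented for the example.

    import numpy as np

    def moving_average(series, window):
        """Equal-weight average of the last `window` observations."""
        return np.convolve(series, np.ones(window) / window, mode="valid")

    def exponential_smoothing(series, alpha):
        """Single exponential smoothing: S_t = alpha*y_t + (1-alpha)*S_{t-1}."""
        smoothed = [series[0]]                 # initialize with first value
        for y in series[1:]:
            smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
        return np.array(smoothed)

    series = np.array([3.0, 5.0, 9.0, 20.0, 12.0, 17.0, 22.0, 23.0, 51.0, 41.0])
    print(moving_average(series, 3))
    print(exponential_smoothing(series, alpha=0.3))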
Causality Analysis
The search for the cause or causes of particular events and objects. A causal factor is a variable which causes change in another variable. Statistical techniques which test the strength of a postulated link between two variables, such as the volume of pedestrian flows and distance from the CBD, include Student's t-test and the chi-squared test. A causal model is basically a hypothesis about the relationship between pairs of variables, but even in the simplest model, with three variables, there will be many different versions of their relationships. The underlying concept of a causal model is multiple causality. Thus, areas of deprivation within the city may be linked with age, social class, and ethnic origin, but these variables are also interconnected.
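Both tests named above are available in SciPy. The sketch below runs a two-sample t-test on pedestrian flows near versus far from the CBD, and a chi-squared test of independence on a 2x2 contingency table; all of the counts are invented for illustration.

    import numpy as np
    from scipy import stats

    # Hypothetical pedestrian counts near vs. far from the CBD.
    near_cbd = np.array([120, 135, 150, 110, 140])
    far_cbd = np.array([60, 75, 50, 80, 65])
    t_stat, t_p = stats.ttest_ind(near_cbd, far_cbd)

    # Hypothetical 2x2 table: zone type (rows) vs. high/low pedestrian flow.
    table = np.array([[30, 10],
                      [12, 28]])
    chi2, chi_p, dof, expected = stats.chi2_contingency(table)

    print("t-test p-value:", t_p)
    print("chi-squared p-value:", chi_p)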
Forecasting Methods
Multiple Regression Analysis: Used when two or more independent factors are involved; widely used for intermediate-term forecasting. Used to assess which factors to include and which to exclude, and to develop alternate models with different factors.
Nonlinear Regression: Does not assume a linear relationship between variables; frequently used when time is the independent variable.
Trend Analysis: Uses linear and nonlinear regression with time as the explanatory variable; used where a pattern exists over time (a short sketch follows).
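A linear trend fit is a one-liner with NumPy, as the sketch below shows; raising deg handles the nonlinear (polynomial) case. The series is invented for the example.

    import numpy as np

    y = np.array([10.2, 11.1, 12.3, 12.9, 14.2, 15.0, 16.4])  # observed series
    t = np.arange(len(y))                                     # time index 0..6

    slope, intercept = np.polyfit(t, y, deg=1)  # linear trend y = slope*t + b
    forecast = slope * (len(y) + 1) + intercept # extrapolate two periods ahead
    print(slope, intercept, forecast)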
Decomposition Analysis: Used to identify several patterns that appear simultaneously in a time series; time-consuming each time it is used; also used to deseasonalize a series.
Moving Average Analysis: Simple moving averages forecast future values based on an average of past values; easy to update.
Weighted Moving Averages: Very powerful and economical; widely used where repeated forecasts are required; uses methods such as sum-of-the-digits and trend adjustment.
Adaptive Filtering: A type of moving average which includes a method of learning from past errors; can respond to changes in the relative importance of trend, seasonal, and random factors.
Exponential Smoothing: A moving average form of time series forecasting; efficient to use with seasonal patterns; easy to adjust for past errors; easy to prepare follow-on forecasts; ideal for situations where many forecasts must be prepared. Several different forms are used, depending on the presence of trend or cyclical variations.
Hodrick-Prescott Filter: A smoothing mechanism used to obtain a long-term trend component in a time series. It is a way to decompose a given series into stationary and nonstationary components such that the sum of squared deviations of the series from the nonstationary (trend) component is minimized, subject to a penalty on changes in the derivatives of the nonstationary component.
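The statsmodels library ships this filter as hpfilter, which returns the stationary (cycle) and nonstationary (trend) components. In the sketch below the series is synthetic, and lamb=1600 is the conventional penalty for quarterly data.

    import numpy as np
    from statsmodels.tsa.filters.hp_filter import hpfilter

    rng = np.random.default_rng(0)
    t = np.arange(100)
    series = 0.5 * t + 10 * np.sin(t / 5) + rng.normal(scale=2.0, size=100)

    cycle, trend = hpfilter(series, lamb=1600)  # stationary + trend components
    print(trend[:5])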
Modeling and Simulation: A model describes the situation through a series of equations; allows testing of the impact of changes in various factors; substantially more time-consuming to construct; generally requires user programming or the purchase of packages such as SIMSCRIPT. Can be very powerful in developing and testing strategies that are otherwise not evident.
Certainty models give only the most likely outcome; "what if" analysis is often done with advanced computer-based spreadsheets.
Probabilistic models use Monte Carlo simulation techniques to deal with uncertainty; they give a range of possible outcomes for each set of events.
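The sketch below illustrates the probabilistic-model idea with a toy Monte Carlo simulation: rather than a single most-likely figure, it reports a range of revenue outcomes. The demand and price distributions are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    n_trials = 10_000

    demand = rng.normal(loc=500, scale=80, size=n_trials)  # uncertain demand
    price = rng.uniform(9.0, 11.0, size=n_trials)          # uncertain price
    revenue = demand * price

    low, high = np.percentile(revenue, [5, 95])
    print(f"90% interval for revenue: {low:.0f} to {high:.0f}")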
Forecasting error: All forecasting models have either an implicit or explicit error structure, where error is defined as the difference between the model prediction and the "true" value. Additionally, many data snooping methodologies within the field of statistics need to be applied to data supplied to a forecasting model. Also, diagnostic checking, as defined within the field of statistics, is required for any model which uses data.
Using any forecasting method, one must use a performance measure to assess the quality of the method. Mean Absolute Deviation (MAD) and variance are the most useful measures. However, MAD does not lend itself to making further inferences, whereas the standard error does. For error analysis purposes, variance is preferred, since the variances of independent (uncorrelated) errors are additive; MAD is not additive.
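Both performance measures are simple to compute from a vector of forecast errors, as below; the actual and forecast values are invented for illustration.

    import numpy as np

    actual = np.array([100.0, 110.0, 120.0, 115.0, 130.0])
    forecast = np.array([98.0, 112.0, 118.0, 119.0, 127.0])
    errors = actual - forecast

    mad = np.mean(np.abs(errors))      # Mean Absolute Deviation
    variance = np.var(errors, ddof=1)  # sample variance of the errors
    print(mad, variance)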
Decision Theory
Decision theory, as the name implies, is concerned with the process of making decisions. The extension to statistical decision theory includes decision making in the presence of statistical knowledge, which provides some information where there is uncertainty. The elements of decision theory are quite logical, and perhaps even intuitive. The classical approach to decision theory facilitates the use of sample information in making inferences about the unknown quantities. Other relevant information includes the possible consequences, which are quantified by a loss function, and the prior information which arises from statistical investigation. The use of Bayesian analysis in statistical decision theory is natural, and their unification provides a foundational framework for building and solving decision problems. The basic ideas of decision theory and of decision-theoretic methods lend themselves to a variety of applications and to computational and analytic advances.
This initial part of the report introduces the basic elements of (statistical) decision theory and reviews some of the basic concepts of both frequentist statistics and Bayesian analysis. This provides a foundational framework for developing the structure of decision problems. The second section presents the main concepts and key methods involved in decision theory. The last section of Part I extends this to statistical decision theory, that is, decision problems with some statistical knowledge about the unknown quantities. This provides a comprehensive overview of the decision-theoretic framework.
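Since the core prescription of (Bayesian) statistical decision theory is to choose the action that minimizes posterior expected loss, a tiny numerical sketch may help fix the idea. The states, actions, loss matrix, and posterior probabilities are all invented for illustration; they are not from the report.

    import numpy as np

    states = ["low demand", "high demand"]
    posterior = np.array([0.3, 0.7])       # hypothetical P(state | data)

    # loss[action, state]: rows are actions, columns are states.
    loss = np.array([[0.0, 10.0],          # action 0: e.g. build small
                     [6.0, 1.0]])          # action 1: e.g. build large

    expected_loss = loss @ posterior       # posterior expected loss per action
    bayes_action = int(np.argmin(expected_loss))
    print(expected_loss, "choose action", bayes_action)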