Monday, 09 February 2009

MAP Assignment 15407079 Dwi Kartika Ayu

Dwi Kartika Ayu 15407079 (Factor Analysis, Cluster Analysis, Hierarchical Analysis, Causality Analysis, Decision Theory, Time Series Analysis)

Qualitative Data Analysis
Qualitative analysis is a labor-intensive activity that requires deep insight, ingenuity, creativity, conceptual sensitivity, and sheer hard work. Qualitative analysis does not proceed in a linear fashion, and it is more difficult and complex than quantitative analysis because it is neither formalized nor standardized.
Qualitative Analysis: General Considerations
The purpose of data analysis, regardless of the type of data involved and of the tradition that guided its collection, is to impose some order on a large body of information so that the data can be synthesized, interpreted, and communicated. Although the primary goal of both qualitative and quantitative analysis is to organize the data, give them structure, and elicit meaning from them, one important difference is that in qualitative studies data collection and data analysis generally occur simultaneously: the search for important concepts and themes begins as soon as data collection starts.
1. There are no systematic rules for analyzing and presenting qualitative data. The absence of systematic analytic procedures makes it difficult for a researcher to present conclusions.
2. A second challenging aspect of qualitative analysis is the sheer volume of work involved. The qualitative analyst must organize and make sense of page after page of narrative material. Those pages must be read and reread, then organized, integrated, and interpreted.
3. The final challenge is data reduction for reporting purposes. The main results of quantitative research can be summarized succinctly; if qualitative data are compressed too much, the very integrity of the narrative material built up during analysis is lost. As a consequence, it is sometimes difficult to present qualitative findings in a format compatible with the space limitations of professional journals.
Descriptive Statistical Analysis
Descriptive statistics is a set of methods concerned with collecting and presenting data so as to yield useful information. Such presentation is intended to reveal the important information contained in the data in a more concise and simpler form, ultimately pointing to the need for explanation and interpretation (Aunuddin, 1989).
Descriptive statistics can be presented as numbers, tables, and graphs. As long as there is no drawing of conclusions, parameter estimation, forecasting, or hypothesis testing, the numbers, tables, and graphs presented are the product of descriptive statistical analysis, not inferential statistical analysis.
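The descriptive/inferential distinction is easy to illustrate in code. The minimal pure-Python sketch below summarizes an invented set of scores with a few summary figures and a frequency table, drawing no inference at all:

```python
import statistics
from collections import Counter

# Hypothetical sample of exam scores, for illustration only
scores = [70, 85, 85, 60, 90, 75, 85, 60, 80, 70]

# Summary figures: purely descriptive, no inference involved
mean = statistics.mean(scores)      # 76
median = statistics.median(scores)  # 77.5
stdev = statistics.pstdev(scores)   # population standard deviation

# A frequency table is another purely descriptive presentation
freq = Counter(scores)

print(f"mean={mean}, median={median}, stdev={stdev:.2f}")
for value, count in sorted(freq.items()):
    print(f"{value}: {'*' * count} ({count})")
```

Estimating a population mean from these scores, or testing a hypothesis about it, would cross the line into inferential statistics.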
Factor Analysis
Factor analysis is a statistical method used to describe variability among observed variables in terms of fewer unobserved variables called factors. The observed variables are modeled as linear combinations of the factors, plus "error" terms. The information gained about the interdependencies can be used later to reduce the set of variables in a dataset. Factor analysis originated in psychometrics, and is used in behavioral sciences, social sciences, marketing, product management, operations research, and other applied sciences that deal with large quantities of data.
Factor analysis is often confused with principal components analysis. The two methods are related, but distinct, though factor analysis becomes essentially equivalent to principal components analysis if the "errors" in the factor analysis model (see below) are assumed to all have the same variance.
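As a rough illustration of the dimensionality-reduction idea the two methods share, the sketch below generates synthetic data driven by a single latent factor and extracts principal components with NumPy. The data, loadings, and noise level are invented for illustration; a full factor-analysis fit would additionally estimate a separate error variance for each observed variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 4 observed variables driven by 1 latent factor plus noise
n = 500
factor = rng.normal(size=n)
loadings = np.array([0.9, 0.8, 0.7, 0.6])   # assumed factor loadings
noise = rng.normal(scale=0.3, size=(n, 4))
X = factor[:, None] * loadings + noise

# Principal components via eigendecomposition of the covariance matrix
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# With one dominant latent factor, the first component should account
# for most of the observed variance
explained = eigvals / eigvals.sum()
print("variance explained per component:", np.round(explained, 3))
```

Because all four variables load on the same latent factor, the first component dominates; the remaining components mostly capture the noise terms.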
Source: http://en.wikipedia.org/wiki/Factor_analysis






Cluster Analysis
Clustering is the assignment of objects into groups (called clusters) so that objects from the same cluster are more similar to each other than objects from different clusters. Often similarity is assessed according to a distance measure. Clustering is a common technique for statistical data analysis, which is used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics.
Besides the term data clustering (or just clustering), there are a number of terms with similar meanings, including cluster analysis, automatic classification, numerical taxonomy, botryology and typological analysis.
Types of clustering
Data clustering algorithms can be hierarchical. Hierarchical algorithms find successive clusters using previously established clusters. Hierarchical algorithms can be agglomerative ("bottom-up") or divisive ("top-down"). Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters.
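The agglomerative ("bottom-up") procedure can be sketched in a few lines of pure Python. The example below uses single-linkage distance on invented one-dimensional points; real implementations work on arbitrary distance matrices and are far more efficient.

```python
# Single-linkage agglomerative clustering on 1-D points (illustrative sketch)
def agglomerate(points, n_clusters):
    # Start with each point as its own cluster ("bottom-up")
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        # Find the pair of clusters with the smallest single-linkage
        # distance, i.e. the closest pair of members across clusters
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        # Merge the closest pair into one successively larger cluster
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

print(agglomerate([1, 2, 9, 10, 25], 2))  # → [[1, 2, 9, 10], [25]]
```

A divisive ("top-down") algorithm would run the same idea in reverse, starting from one all-encompassing cluster and splitting it.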
http://en.wikipedia.org/wiki/Data_clustering
Hierarchical Analysis
"A hierarchy is an organization of elements that, according to prerequisite relationships, describes the path of experiences a learner must take to achieve any single behavior that appears higher in the hierarchy" (Seels & Glasgow, 1990, p. 94). Thus, in a hierarchical analysis, the instructional designer breaks a task down from top to bottom, showing the hierarchical relationship among the tasks, and instruction is then sequenced from the bottom up. For example, where task 4 is decomposed into its enabling tasks, the learner cannot perform the third task until he or she has performed the first and second tasks respectively.


How do I conduct a hierarchical analysis?
The starting point for constructing a hierarchy is a comprehensive list of the tasks that make up a job or function. There are three major steps to constructing a hierarchy:
1. Cluster or group the tasks. For inclusion in a group, select tasks that bear close resemblance to each other. Each task must be included in at least one of the groups, but a task may also be common to several groups. Label the groups with terms that emerge from the job or function being analyzed. Initial clustering or grouping of tasks may be tentative. The composition of the groups may change as a result of decisions you make later on. Do not hesitate to regroup tasks when it seems appropriate.
2. Organize tasks within each group to show the hierarchical relationships for learning. Ask yourself "What would the learner have to learn in order to do this task?" Once the essential prerequisite relationships are shown, reevaluate the relationship between each pair of tasks with the question "Can this superordinate task be performed if the learner cannot perform this subordinate task?" The lower level skill must be integrally related to the higher-level skill. The learning types (domains) of the tasks should match horizontally.
3. Confer with a subject matter expert to determine the hierarchy’s accuracy. This step occurs concurrently with Steps 1 and 2.
(Seels & Glasgow, 1990)
http://classweb.gmu.edu/ndabbagh/Resources/Resources2/hierarchical_analysis.htm
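The prerequisite relationships such a hierarchy captures can be modeled as a simple data structure. In the hypothetical sketch below (the task names and the `can_attempt` helper are invented for illustration), a superordinate task may be attempted only once every subordinate task has been mastered:

```python
# Hypothetical task hierarchy: each task maps to its direct prerequisites
prerequisites = {
    "task4": ["task1", "task2", "task3"],
    "task3": ["task1", "task2"],
    "task2": [],
    "task1": [],
}

def can_attempt(task, mastered, prereqs):
    """A task may be attempted only when every subordinate task,
    direct or indirect, has already been mastered."""
    return all(
        p in mastered and can_attempt(p, mastered, prereqs)
        for p in prereqs.get(task, [])
    )

print(can_attempt("task3", {"task1", "task2"}, prerequisites))  # True
print(can_attempt("task4", {"task1", "task2"}, prerequisites))  # False
```

This mirrors step 2 above: the question "Can this superordinate task be performed if the learner cannot perform this subordinate task?" becomes a traversal of the prerequisite map.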
Causality Analysis
Cause-effect relationships (causal relationships) are an integral part of our worldview. They are often the targets of scientific investigation, but they are also part of our everyday language. We are interested in knowing if carbon dioxide causes global warming, if smoking leads to cancer, and if cheaper liquor increases the number of people who drown in summer. Also when we ask, "would I have been able to catch the bus, had I run?" or "will people emigrate if we raise the taxes?", the answers depend on what causal relationships exist in our world.
Causality is one of the oldest concepts in philosophy. Various definitions of what constitutes a cause-effect relationship have been put forth, and the question of if (and when) one can actually identify such a relationship has been studied extensively. One of the main problems has been how to take into account uncertainties. For example, although excessive smoking does not necessarily lead to cancer, we would still like to say that smoking is a cause (among many) of cancer, if it significantly increases the probability of getting ill.
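The probability-raising idea in the smoking example can be made concrete with a small worked calculation. The contingency counts below are invented for illustration, and probability raising alone does not rule out confounding; it is only one ingredient of a causal claim.

```python
# Probability-raising sketch on an invented 2x2 contingency table.
# All counts are hypothetical, for illustration only.
counts = {
    ("smoker", "ill"): 30, ("smoker", "healthy"): 70,
    ("nonsmoker", "ill"): 10, ("nonsmoker", "healthy"): 90,
}

total = sum(counts.values())

# Unconditional probability of illness
p_ill = (counts[("smoker", "ill")] + counts[("nonsmoker", "ill")]) / total

# Probability of illness conditional on the putative cause
p_ill_given_smoker = counts[("smoker", "ill")] / (
    counts[("smoker", "ill")] + counts[("smoker", "healthy")]
)

# The probability-raising criterion: conditioning on the putative
# cause raises the probability of the effect
print(p_ill, p_ill_given_smoker)  # 0.2 0.3
```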
Probability theory (and statistics) is the language with which present-day science describes uncertainty. But it should be noted that this language does not, in its traditional form, include any reference to causality. We can express that certain events happen together, and say how frequently they do, but we cannot describe that one event actually causes another. This is changing: over the last 15-20 years a great deal of work has been done on extending the standard notation and methods to take cause-effect relationships into account.
http://www.cs.helsinki.fi/u/phoyer/opetus/kausaalisuus/

Time Series Analysis
In statistics, signal processing, and many other fields, a time series is a sequence of data points, measured typically at successive times, spaced at (often uniform) time intervals. Time series analysis comprises methods that attempt to understand such time series, often either to understand the underlying context of the data points (where did they come from? what generated them?), or to make forecasts (predictions). Time series forecasting is the use of a model to forecast future events based on known past events: to forecast future data points before they are measured. A standard example in econometrics is the opening price of a share of stock based on its past performance.
The term time series analysis is used to distinguish such problems, firstly from more ordinary data-analysis problems (where there is no natural ordering of the observations), and secondly from spatial data analysis, where the observations typically relate to geographical locations. There are additional possibilities in the form of space-time models (often called spatial-temporal analysis). A time series model will generally reflect the fact that observations close together in time tend to be more closely related than observations further apart. In addition, time series models often exploit the natural one-way ordering of time, so that values in a series for a given time are expressed as deriving in some way from past values rather than from future values (see time reversibility).
Methods for time series analysis are often divided into two classes: frequency-domain methods and time-domain methods. The former centre around spectral analysis and, more recently, wavelet analysis, and can be regarded as model-free analyses well suited to exploratory investigation. Time-domain methods have a model-free subset consisting of autocorrelation and cross-correlation analysis, but it is here that partially and fully specified time series models make their appearance.
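Of the time-domain tools just mentioned, the sample autocorrelation function is the simplest to sketch in pure Python. The series below is invented; an alternating sequence gives strongly negative autocorrelation at lag 1 and strongly positive autocorrelation at lag 2.

```python
def autocorrelation(series, lag):
    """Sample autocorrelation of a series at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    # Covariance between the series and a lag-shifted copy of itself
    cov = sum(
        (series[t] - mean) * (series[t + lag] - mean)
        for t in range(n - lag)
    )
    return cov / var

# A strongly alternating series: neighbours disagree, next-neighbours agree
series = [1, -1, 1, -1, 1, -1, 1, -1]
print(autocorrelation(series, 1))  # -0.875
print(autocorrelation(series, 2))  # 0.75
```

A time series model would then try to explain this dependence structure, e.g. by expressing each value as a function of past values.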
http://en.wikipedia.org/wiki/Time_series

Decision Theory
Decision theory in mathematics and statistics is concerned with identifying the values, uncertainties and other issues relevant in a given decision and the resulting optimal decision.
Normative and descriptive decision theory
Most of decision theory is normative or prescriptive, i.e. it is concerned with identifying the best decision to take, assuming an ideal decision maker who is fully informed, able to compute with perfect accuracy, and fully rational. The practical application of this prescriptive approach (how people ought to make decisions) is called decision analysis, and is aimed at finding tools, methodologies and software to help people make better decisions. The most systematic and comprehensive software tools developed in this way are called decision support systems.
Since it is obvious that people do not typically behave in optimal ways, there is also a related area of study, which is a positive or descriptive discipline, attempting to describe what people will actually do. Since the normative, optimal decision often creates hypotheses for testing against actual behaviour, the two fields are closely linked. Furthermore it is possible to relax the assumptions of perfect information, rationality and so forth in various ways, and produce a series of different prescriptions or predictions about behaviour, allowing for further tests of the kind of decision-making that occurs in practice.
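The normative notion of an "optimal decision" can be sketched as expected-payoff maximization under uncertainty. The states, probabilities, and payoffs below are invented for illustration:

```python
# Choice under uncertainty: pick the action with the highest expected
# payoff. All probabilities and payoffs are hypothetical.
states = {"rain": 0.3, "sun": 0.7}
payoffs = {
    "take umbrella": {"rain": 5, "sun": 3},
    "no umbrella": {"rain": -10, "sun": 6},
}

def expected_payoff(action):
    # Weight each state's payoff by the probability of that state
    return sum(p * payoffs[action][state] for state, p in states.items())

best = max(payoffs, key=expected_payoff)
for action in payoffs:
    print(action, expected_payoff(action))
print("optimal:", best)
```

Descriptive decision theory then asks whether real decision makers actually choose this way; experiments show they systematically deviate from it.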
What kinds of decisions need a theory?
• Choice under uncertainty
• Intertemporal choice
• Competing decision makers
• Complex decisions
• Paradox of choice
• Statistical decision theory
http://en.wikipedia.org/wiki/Decision_theory
