If you are looking for the MEC-109 IGNOU Solved Assignment solution for the subject Research Methods in Economics, you have come to the right place. The MEC-109 solution on this page applies to students of the 2021-22 session studying in MEC courses of IGNOU.
MEC-109 Solved Assignment Solution by Gyaniversity
Assignment Code: MEC-109/AST/2021-22
Course Code: MEC-109
Assignment Name: Research Methods in Economics
Year: 2021-2022 (July 2021 and January 2022)
Verification Status: Verified by Professor
Note: Answer all the questions. While questions in Section A carry 20 marks each (to be answered in about 700 words each), those in Section B carry 12 marks each (to be answered in about 500 words each). In the case of numerical questions, the word limit does not apply.
Section-A
1. Distinguish between realism and constructivism. Explain the various approaches that are resorted to in conducting research in social sciences under constructivism.
Ans) Some authors have classified paradigms into three categories, re-naming them as realism, constructivism and pragmatism:
Realism begins by assuming that there is a real world that is external to the experience of any particular person and the goal of research is to understand that world.
Constructivism begins by assuming that everyone has unique experiences and beliefs, and it posits that no reality exists outside of those perceptions.
Pragmatism considers realism and constructivism as two alternative ways to understand the world. However, questions about the nature of reality are less important than questions about what it means to act and to experience the consequences of those actions.
Knowledge of all these perspectives enables a researcher to make a meaningful choice about:
the research problem;
the research questions to investigate this problem;
the research strategies to answer these questions;
the approaches to social enquiry that accompany these strategies;
the concept and theory that direct the investigation;
the sources, forms and types of data;
the methods for collecting and analyzing the data to answer these questions.
Broadly, two types of approaches are used in conducting research in social sciences: quantitative and qualitative. The studies conducted within the perspective (framework) of positivism/post-positivism/realism generally resort to the quantitative approach and are termed 'quantitative research'. Quantitative research integrates purposes and procedures that are deductive, objective and generalized. Emphasis is laid on the construction of general theories which are applied universally. Well-controlled procedures with a large number of cases are followed in conducting the studies.
On the other hand, the studies conducted within the perspectives of the critical theory and interpretivism paradigms are termed qualitative research. By using induction as a research strategy, qualitative research creates theory and discovery through flexible, emergent research designs. It tries to evolve meaning and interpretation based on closer contact between researchers and the people they study. Thus, qualitative research consists of purposes and procedures that integrate inductive, subjective and contextual approaches. Based on the above outline of the types of research, we may say that there are two basic approaches to research, viz., the quantitative approach and the qualitative approach.
Mixed methods research, combining quantitative and qualitative methods, is also being used in social sciences. Hence, the mixed method design, integrating the quantitative and qualitative approaches, has emerged as a further approach to social enquiry. The quantitative approach can be sub-classified into the inferential approach, the experimental approach and the simulation approach. In the inferential approach, a database is established through the survey method and inferences are drawn about the characteristics of, or relationships between, variables. In the experimental approach, greater control is exercised over the research environment: some variables are manipulated to observe their effect on other variables. In the simulation approach, an artificial environment is constructed within which relevant data can be generated under controlled, assumed conditions.
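To make the simulation approach concrete, here is a minimal Python sketch. Everything in it is hypothetical: the assumed income distribution, the policy effect of 0.8 and the subsidy levels are invented only to show how a researcher can manipulate a variable in an artificial environment and observe its effect.

```python
import random

# Simulation approach in miniature: generate artificial observations
# under assumed conditions instead of collecting field data.
random.seed(42)

def simulate_household_income(subsidy):
    """Draw a hypothetical household income given a policy subsidy."""
    base = random.gauss(100, 15)   # assumed base income distribution
    return base + 0.8 * subsidy    # assumed (hypothetical) policy effect

# Manipulate the subsidy level and observe its effect on mean income.
for subsidy in (0, 10, 20):
    incomes = [simulate_household_income(subsidy) for _ in range(10_000)]
    print(f"subsidy={subsidy}: mean income = {sum(incomes)/len(incomes):.1f}")
```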
2. Identify the various approaches followed by the Indian Statistical System in generating data on various economic and social phenomena. Which precautions should be taken in using data collected from various sources?
Ans) The Indian Statistical System uses six ways to generate statistics on a variety of economic and social phenomena. To begin with, Central Acts such as the Census Act of 1948 and the Collection of Statistics Act of 2008, among others, allow government agencies to undertake censuses and large-scale sample surveys for data collection at regular intervals. Second, statutory returns required by other Acts such as the Factories Act of 1948, the Companies Act of 1956, the Reserve Bank of India Act of 1934, and the Registration of Births and Deaths Act of 1969, as well as the implementation of these Acts, generate data on topics not covered by the surveys. Third, statistics gathered by separate Ministries, Departments, and Organizations of the Central and State Governments as part of their respective tasks represent the current state of affairs in various sectors and sub-sectors of the economy, as well as administrative divisions of the country. Fourth, these organisations' administrative reports enhance such data. Fifth, data derived from the aforementioned data flows, such as the National Accounts Statistics (NAS), price and production index numbers, and indices like the Human Development Index and Gender Development Index, provide readily usable inputs for research and policy, as well as evaluation of the impact of development policies and programmes on the economy and society. Finally, a vast number of surveys and research studies on a variety of themes done by various public and private agencies constitute another flow of data and information.
The Government of India's Ministry of Statistics and Programme Implementation is the apex body in the country's official statistical system. The Ministry is led by the country's Chief Statistician. The Central Statistics Office (CSO), the National Sample Survey Office (NSSO), and the Computer Centre (CC) make up the Ministry. At the state/UT level, the Directorates of Economics and Statistics are in charge. The CSO is in charge of coordinating the country's statistical efforts, establishing and maintaining statistical norms and standards, and liaising with central, state, and international statistical bodies. A National Statistical Commission (NSC) is also in place, with a Chairman and Members who are eminent economists and statisticians from research institutions, as well as representatives from Central Ministries and Departments and State Directorates of Economics and Statistics, to provide (i) overall guidance for statistical development in the country and (ii) policy advice to the government, and (iii) to ensure effective coordination of all statistical activities. The CSO serves as the Secretariat of the NSC.
As the country's central statistical authority, the CSO is in charge of coordinating statistical activities as well as developing and maintaining statistical standards. Three CSO publications should be mentioned in this context, namely the Sources and Methods of National Accounts Statistics, the National Industrial Classification, and the Consumer Price Index. The first two are published on an ad hoc basis, while the third is published monthly. The first discusses the methodologies and classification principles to be used in the creation of macroeconomic aggregates, the second discusses the classification principles to be used for national and international comparability, and the third provides price indices by state/UT.
Section-B
3. What do you mean by research design? State its various constituents. What type of research design do you suggest for descriptive research?
Ans) Research design is the logical structure of an enquiry, and its formulation is guided and determined by the research questions raised in the problem under investigation. Apart from specifying the logical structure of the data, a research design also tests and eliminates alternative explanations. Broadly, the observational design, the sampling design and the statistical design are covered in a research design. The various attributes of people, objects or concepts are being increasingly included in explanations of human behaviour in Economics. Hence, these individual traits and attitudes need to be measured for deeper analysis. Research design and measurement issues therefore constitute a central theme of research methodology.
Research in common parlance refers to a search for knowledge. It can be defined as a scientific and systematic enquiry either to discover new facts or to verify old facts, their sequences, interrelationships, causal explanation and the adherence to natural laws governing them. It thus aims to discover the truth by applying scientific methods.
Research Methodology is a wider term. It consists of three important elements:
theoretical perspectives or orientation to guide research and logic of enquiry,
tools and techniques of data collection, and
methods of data analysis.
The term "research methods" refers to the procedures and tools used in doing research. The practical components of obtaining data, as well as the method the information/data obtained/collected is structured and analysed, are referred to as research techniques. The instruments used for data collecting and analysis are known as tools. It has questionnaires, timetables, diaries, check lists, maps, images, and sketches, among other things. Quantitative data is collected primarily through census and survey procedures. Participant observation, semistructured interviews, life narratives, experiments, pilot studies, scenarios, and other methods are used to collect data in qualitative research. A collection of statistical approaches is used in data analysis to create correlations between variables and to assess the accuracy of the results. As a result, the research process is divided into three parts: methodology, methods, and tools/techniques. In many cases, any one of these three criteria may not be sufficient. For example, without proper knowledge of data gathering methodologies, no data can be routinely collected. Similarly, data cannot be comprehended without first understanding the philosophy or point of view that underpins the qualities that underpin the variables to which the data pertains. To analyse the data effectively, you'll need a solid understanding of statistical procedures.
Descriptive research is used to describe a scenario, a series of events, or a social system. Its goal is to describe the current situation. Descriptive research includes many types of surveys and fact-finding inquiries. In descriptive research studies, a variety of survey methodologies are utilised, including comparative and correlational methods. A survey of the socioeconomic conditions of rural/urban labour is an example of a descriptive study. The researchers in descriptive research studies have no influence over the variables; they can only report on what has occurred or is currently occurring. 'How does X vary with Y?' or 'How does malnutrition vary with age and sex?' are examples of descriptive research questions. The goal of explanatory research, by contrast, is to determine cause-and-effect relationships. The researcher analyses and evaluates the data/information using the facts or information currently available. 'Is increased agricultural output explained by improved rural roads?' is an example of an explanatory research question.
4. Distinguish between a statistical hypothesis and a research hypothesis. State whether the term 'hypothesis' used in testing the unknown population regression parameters is a statistical or a research hypothesis. Give reasons.
Ans) A statistical hypothesis is a hypothesis that is testable on the basis of observed data modelled as the realised values taken by a collection of random variables. A set of data is modelled as the realised values of a collection of random variables having a joint probability distribution in some set of possible joint distributions. The hypothesis being tested specifies exactly such a set of possible probability distributions. A statistical hypothesis test is a method of statistical inference. An alternative hypothesis is proposed for the probability distribution of the data, either explicitly or only informally. The comparison of the two models is deemed statistically significant if, according to a threshold probability (the significance level), the data would be unlikely to occur if the null hypothesis were true. A hypothesis test specifies which outcomes of a study may lead to a rejection of the null hypothesis at a pre-specified level of significance, using a pre-chosen measure of deviation from that hypothesis (the test statistic, or goodness-of-fit measure). The pre-chosen level of significance is the maximal allowed "false positive rate": one wants to control the risk of incorrectly rejecting a true null hypothesis.
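A minimal worked example of such a test, with hypothetical sample data and the conventional 5% significance level (the null hypothesis that the population mean wage equals 300 is assumed purely for illustration):

```python
import numpy as np
from scipy import stats

# H0: the population mean wage equals 300 (hypothetical claim).
# We reject H0 only if the sample would be unlikely were H0 true.
sample = np.array([310, 295, 320, 305, 330, 315, 298, 325])
alpha = 0.05  # pre-chosen significance level (max false positive rate)

t_stat, p_value = stats.ttest_1samp(sample, popmean=300)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```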
A research hypothesis is a specific, clear, and testable proposition or predictive statement about the possible outcome of a scientific research study based on a particular property of a population, such as presumed differences between groups on a particular variable or relationships between variables. Specifying the research hypotheses is one of the most important steps in planning a scientific quantitative research study. A quantitative researcher usually states an a priori expectation about the results of the study in one or more research hypotheses before conducting the study, because the design of the study and the planned analysis are often determined by the stated hypotheses. Thus, one of the advantages of stating a research hypothesis is that it requires the researcher to think the study through fully before conducting it.
The term 'hypothesis' used in testing the unknown population regression parameters is a statistical hypothesis. The scope of a regression model is not restricted to the estimation of the parameters alone. An important purpose of the sample estimates is to test, with their help, some hypotheses about the unknown population regression parameters. We can do this if we make some assumption about the distribution of the disturbance term U, and we do so by assuming normality for U. The assumption essentially means that the population regression disturbance term follows a normal distribution with mean zero, a constant variance equal to σ², and zero covariance. In fact, U has a conditional distribution, in the sense that for each given value of the non-stochastic independent variable X we may have a distribution of different values of U.
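A sketch of what such a test looks like in practice, using Python's statsmodels library on artificially generated data. The 'true' parameter values (1.5 and 0.7) and the noise level are assumptions of this simulation, not estimates from any real dataset; the fitted model reports a t-statistic and p-value for each unknown population regression parameter:

```python
import numpy as np
import statsmodels.api as sm

# Generate hypothetical data for Y = b0 + b1*X + U with U ~ N(0, sigma^2),
# mirroring the normality assumption described above.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
U = rng.normal(0, 2, size=50)   # normal disturbances, sigma = 2 (assumed)
Y = 1.5 + 0.7 * X + U           # hypothetical 'true' parameters

model = sm.OLS(Y, sm.add_constant(X)).fit()
print(model.tvalues)   # t-statistics for H0: parameter = 0
print(model.pvalues)   # corresponding p-values
```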
5. What is factor analysis? State with illustration the various uses of factor analysis.
Ans) The phrase "factor analysis" refers to a group of comparable but separate multivariate statistical models that model observed variables as linear functions of a set of latent or hypothetical variables (also known as factors) that are not directly observed.
In the sense that both types of models have dependent variables that are linear functions of independent variables, factor analysis models are similar to regression models. They are, however, distinct in that the independent variables in factor analysis models are not observed independently of the observed dependent variables.
In factor analysis models, the factor variables can be either determinate or indeterminate. The various component analysis models, such as principal components analysis, weighted principal components, and Guttman's image analysis, are included among the determinate models. The common factor model, which attempts to account for co-variation between observed variables through a set of common factor variables, represents the indeterminate models.
Determinate factor analysis is more useful for data reduction, since it identifies a smaller number of variables that carry most of the variation and co-variation information among the observed variables. Indeterminate models, on the other hand, have common factors that are indeterminate from the observed variables and are not linear combinations of them. Here we discuss only determinate factor analysis.
Factor analysis is based on a model in which the observed vector is partitioned into an unobserved systematic part and an unobserved error part. All components of the error vector are treated as uncorrelated or independent, while the systematic part is treated as a linear combination of a small number of unobserved factor variables. The analysis distinguishes between the effects of the factors of primary interest and the errors. In other words, the analysis describes or explains the interdependence of a set of variables in terms of the factors without taking into account the observed variability.
Factor analysis is used to identify latent constructs or factors. It is commonly used to reduce variables into a smaller set of factors to save time and facilitate easier interpretation. There are many extraction techniques, such as the principal component method and maximum likelihood. Factor analysis and principal component analysis are among the oldest of the multivariate statistical methods of data reduction. Mathematically, factor analysis is complex, and the criteria used to determine the number and significance of factors are many.
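As an illustration of data reduction with the principal component method, the sketch below (using scikit-learn) reduces six hypothetical observed indicators, generated from two underlying factors plus noise, back to two components. The loading matrix and noise level are invented for the example:

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulate 200 observations of 6 indicators driven by 2 latent factors.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))    # two underlying factors
loadings = rng.normal(size=(2, 6))    # assumed (hypothetical) loading matrix
observed = latent @ loadings + 0.3 * rng.normal(size=(200, 6))

# Reduce the 6 observed variables to 2 principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(observed)
print("Variance explained:", pca.explained_variance_ratio_.round(2))
```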
6. What is the difference between data collection and data generation? State the various steps involved in analysis of qualitative data.
Ans) The fundamental premise of participatory research is the direct participation of the researcher in the process of data generation. An important distinction is made here between data collection and data generation. When data is collected by an agency on large sample populations, standardizations are made with the assumption that all those being interviewed will understand and respond to the questions in the manner in which the primary researcher has conceptualized them.
In the field situation, this may or may not be the case. Each field researcher asking those questions may convey a different meaning, and the respondent may give an answer that does not fit into any of the standard categories; yet the field researcher will reduce the answer and fit it into one of the given categories in a structured schedule. Hence, the results obtained may not necessarily reflect the real sentiments or the opinions of the people involved in the study. It is for this reason that this process of procuring data using the survey method and questionnaire is called data collection.
In participatory research, the primary researcher is always in contact with the respondent and has face-to-face interaction. If the researcher thinks that the respondent has not understood the query, he has the freedom to change the language, reconstruct his probe question, or collect the information from another, indirect source. In this approach, called data generation, the researcher has the flexibility to generate multiple answers to a single query and then use his/her interpretative skills to draw an inference or meaning out of them to arrive at a generalization. In the participatory method of data generation, the processes of data collection and analysis proceed simultaneously, making the method more reliable and able to present a plurality of responses and possibilities. It is because of this flexibility and the ability to generate reliable generalizations that participatory methods of research have acquired importance in the research methodology used by present-day economists.
Steps for doing analysis of qualitative data (a toy coding sketch follows this list):
Compilation of field notes and observations recorded in the field diary.
Some scholars prefer to have a complete verbatim account of the data transcribed, but others opt to use 'indexed recordings and notes' (ibid: 192).
Interrogation of the data and the diligence shown by the researcher are the key to producing good transcripts and reliable generalizations.
Codes and themes have to be developed to capture meaning.
Grounded theory, which uses concepts developed in the field by the respondents, is an important way of theorizing and analysing in qualitative research.
Both computer-based and manual analysis can be done.
NUD*IST and ATLAS/TI are among the computer software packages used for qualitative analysis.
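To make the coding step concrete, here is a toy Python sketch that assigns hypothetical thematic codes to interview responses and counts their frequency. The codebook, keywords and responses are all invented; in real qualitative analysis the codes emerge from repeated reading of the transcripts, as grounded theory suggests:

```python
import re
from collections import Counter

# Hypothetical codebook mapping keywords to thematic codes.
codebook = {
    "cost": "affordability",
    "price": "affordability",
    "far": "access",
    "distance": "access",
    "trust": "credibility",
}

# Invented interview responses for illustration only.
responses = [
    "The cost of seeds is too high for small farmers.",
    "The market is too far from our village.",
    "We do not trust the middlemen's weighing scales.",
]

theme_counts = Counter()
for text in responses:
    words = set(re.findall(r"[a-z]+", text.lower()))  # tokenize into words
    for keyword, code in codebook.items():
        if keyword in words:
            theme_counts[code] += 1

print(theme_counts)  # frequency of each coded theme across responses
```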
7. Distinguish between any three of the following:
iii. Interval Scale and Ratio Scale of measurement
Ans) Interval Scale: When we can speak not only of differences in order but also of the degree of difference between values, the scale is referred to as an interval scale. For example, if we are asked to rate our satisfaction with a piece of software on a 7-point scale, from dissatisfied to satisfied, we are using an interval scale. An interval scale, however, has no true zero point, so the ratio of two values is not meaningful.
Mean and standard deviation, correlation, regression, analysis of variance and factor analysis techniques can be used with interval-scale data.
Ratio Scale: A ratio scale is the top level of measurement and satisfies the following properties:
Measurement of each observation of a variable in numerals (quantitative terms), hence it is possible to work out the ratio of two observations. For a variable X taking two values X1 and X2, the ratio will be X1 / X2.
Measurement of the distance between two observations X1 and X2, i.e. (X2 – X1).
Indication of the natural ordering (ascending or descending) of the elements of a variable. Therefore, comparisons such as X2 > X1 or X2 < X1 are meaningful.
The statistical techniques used with interval-scale data can also be used with ratio-scale data. A small numeric illustration of the distinction follows.
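A minimal sketch contrasting the two scales, using temperature (interval: the zero point is arbitrary) and income (ratio: zero means no income) as stock examples; the numbers are arbitrary:

```python
import numpy as np

# Interval scale: differences are meaningful, ratios are not.
celsius = np.array([10.0, 20.0, 40.0])     # 0 degrees C is an arbitrary zero
# Ratio scale: both differences and ratios are meaningful.
income = np.array([100.0, 200.0, 400.0])   # 0 means genuinely no income

print(celsius[2] - celsius[1])  # 20.0 -> a valid interval-scale statement
print(income[2] / income[1])    # 2.0  -> valid only on a ratio scale
# 40 C is not 'twice as hot' as 20 C, but 400 is twice the income of 200.
```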
iv. Ontology and Epistemology
Ans) Ontology is the branch of philosophy that studies concepts such as existence, being, becoming, and reality. It includes the questions of how entities are grouped into basic categories and which of these entities exist on the most fundamental level. Ontology is sometimes referred to as the science of being and belongs to the major branch of philosophy known as metaphysics.
Epistemology is the branch of philosophy concerned with knowledge. Epistemologists study the nature, origin, and scope of knowledge, epistemic justification, the rationality of belief, and various related issues. Epistemology is considered a major subfield of philosophy, along with other major subfields such as ethics, logic, and metaphysics.
v. Verification and corroboration
Ans) Verification refers to the use of empirical data, observation, test or experiment to confirm the truth or rational justification of a hypothesis.
In the early phase, the emphasis was on verification: only statements admitting complete verification by observational evidence were considered empirically meaningful.
Corroborating evidence (or corroboration) is evidence that tends to support a proposition that is already supported by some initial evidence, thereby confirming the proposition. For example, W, a witness, testifies that she saw X drive his automobile into a green car. Meanwhile, Y, another witness, testifies that when he examined X's car later that day, he noticed green paint on its fender. Y's testimony corroborates W's. There can also be corroborating evidence related to a particular source, such as evidence of what makes an author think a certain way, supplied by witnesses or objects.