MEC-103: Quantitative Methods

IGNOU Solved Assignment Solution for 2023-24

If you are looking for MEC-103 IGNOU Solved Assignment solution for the subject Quantitative Methods, you have come to the right place. MEC-103 solution on this page applies to 2023-24 session students studying in MEC courses of IGNOU.

MEC-103 Solved Assignment Solution by Gyaniversity

Assignment Solution

Assignment Code: MEC-103/TMA/2023-24

Course Code: MEC-103

Assignment Name: Quantitative Methods

Year: 2023-2024

Verification Status: Verified by Professor

Section A

Answer the following questions in about 700 words each. The word limits do not apply in case of numerical questions. Each question carries 20 marks.

Q1. i) What do you understand by the term ‘test of significance’? Why do we need to test any distribution for significance? Explain the reasons.

Ans) In statistical terminology, a test of significance is a method used to assess whether the results observed in an experiment or study are likely to have occurred by chance or whether they are statistically significant. The observed data are compared against a null hypothesis, which assumes that there is no true effect or difference. The objective is to determine whether the observed differences, associations, or effects are statistically significant or whether they could be the result of random fluctuation.

Reasons for Testing Distributions for Significance

a) Differentiating Between Chance and True Effects:

1) Explanation: In any research or experiment, there is inherent variability, and observed differences may occur by chance. A test of significance helps researchers distinguish between variations that are likely due to random sampling fluctuations and those that are indicative of a true effect or relationship.

2) Example: In a clinical trial testing the effectiveness of a new drug, a test of significance would assess whether the observed improvement in the treatment group is statistically significant or if it could have occurred randomly.

b) Validating Hypotheses:

1) Explanation: Researchers typically formulate hypotheses about the expected relationships or effects in a study. A test of significance provides a systematic way to assess whether the data supports or contradicts these hypotheses, helping to validate or reject them.

2) Example: In educational research, if a hypothesis posits that students who receive a particular teaching intervention will perform better on exams, a test of significance would assess whether the observed differences in exam scores are statistically significant.

c) Generalizability:

1) Explanation: Testing for significance helps in determining whether the results observed in a sample are likely to generalize to the larger population. It provides a level of confidence in extending findings beyond the studied sample.

2) Example: If a survey of a sample of voters indicates a preference for a particular political candidate, a test of significance would assess whether this preference is likely to be reflective of the entire voting population.

d) Decision-Making in Business and Policy:

1) Explanation: In business and policy settings, decisions often rely on data analysis. Testing distributions for significance provides a basis for making informed decisions by determining whether observed patterns or changes are statistically significant or could be due to random fluctuations.

2) Example: A company analyzing the impact of a new marketing strategy might use a test of significance to assess whether the observed increase in sales is statistically significant before deciding to implement the strategy on a larger scale.

e) Quality Control in Manufacturing:

1) Explanation: In manufacturing processes, maintaining consistent product quality is crucial. Testing for significance allows manufacturers to assess whether variations in product specifications or quality control measures are statistically significant or within acceptable limits.

2) Example: A factory producing electronic components might use statistical tests to ensure that variations in the dimensions of the components are not statistically significant, ensuring consistent quality.

f) Scientific Research Reproducibility:

1) Explanation: The replicability of research findings is a cornerstone of scientific research. Testing for significance provides a means to evaluate whether the observed results are reproducible across different samples or studies.

2) Example: In psychological research studying the effects of a cognitive intervention, a test of significance would assess whether the observed improvements in one study are likely to be reproduced in similar studies with different participants.

g) Setting Confidence Intervals:

1) Explanation: Testing for significance helps in setting confidence intervals around estimated parameters. This provides a range within which researchers can be reasonably confident that the true population parameter lies.

2) Example: When estimating the average response time in a psychological experiment, a test of significance helps in determining the range within which the true population average is likely to fall.

h) Ethical Considerations:

1) Explanation: In fields such as medicine and pharmaceuticals, where human well-being is at stake, testing for significance is crucial for ethical reasons. It ensures that decisions about treatments, interventions, or drug efficacy are based on robust statistical evidence rather than chance.

2) Example: Before a new drug is approved for widespread use, rigorous testing for significance is conducted to ensure that any observed benefits are statistically significant and not the result of random chance.
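The drug-trial example above can be sketched as a one-sample z-test. This is a minimal illustration using only the standard library; the trial numbers (100 patients, observed mean improvement 5.2 against a null mean of 5.0, known SD 0.8) are hypothetical, and the z-test is just one of many tests of significance:

```python
import math

def z_test_two_sided(sample_mean, null_mean, sd, n):
    """One-sample z-test: is the observed mean significantly
    different from the null-hypothesis mean? Returns (z, p)."""
    se = sd / math.sqrt(n)                 # standard error of the mean
    z = (sample_mean - null_mean) / se     # standardized test statistic
    # two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical numbers (not from the text): 100 patients, observed
# mean improvement 5.2 against a null mean of 5.0, known SD 0.8.
z, p = z_test_two_sided(5.2, 5.0, 0.8, 100)
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05 -> reject H0 at the 5% level
```

A small p-value says the observed improvement would be unlikely under pure chance, which is exactly the distinction a test of significance is designed to draw.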

Q1. ii) Discuss the concepts of confidence intervals and confidence limits with the help of an example.

Ans) Confidence Interval

A confidence interval (CI) is a range of values, derived from sample data, that is used to estimate the interval within which a population parameter is likely to lie. It also provides a measure of the uncertainty associated with the estimate. The confidence level, usually stated as a percentage, represents the probability that an interval constructed in this way contains the true parameter.

Formula for Confidence Interval:

Confidence Interval = Point Estimate ± Margin of Error


Point Estimate is the sample statistic used to estimate the population parameter (e.g., sample mean or proportion).

Margin of Error is a critical value multiplied by the standard error of the point estimate.

Confidence Limits

Confidence limits, on the other hand, are the specific values that mark the endpoints of a confidence interval. In other words, they are the lower and upper bounds of the range within which the population parameter is expected to fall.

Calculation of Confidence Limits:

Lower Limit = Point Estimate − Margin of Error

Upper Limit = Point Estimate + Margin of Error

Example of Confidence Interval and Confidence Limits is as follows:

Unveiling Confidence: A Bakery Tale of Intervals and Limits

Imagine you're a passionate baker, eager to perfect your signature chocolate chip cookies. You diligently follow the recipe, but a crucial question lingers: how big will they be? Enter the realm of confidence intervals and limits, your trusty guides in navigating the delicious (and sometimes not-so-delicious) realm of uncertainty.

Confidence Interval: Think of it as a safety net for your cookie size prediction. Based on your chosen confidence level (say, 95%), it's the range within which you're highly likely (95% of the time) to find the true average size of your cookies. Think of it as a fence around your estimate, keeping your prediction from straying toward miniature pebbles or giant puddles.

Confidence Limit: These are the edges of your confidence interval fence. They mark the upper and lower bounds within which you're confident the average cookie size resides. In our baking analogy, these limits might be 2 inches and 3 inches for a 95% confidence interval.

You bake a batch of 12 cookies, following the recipe meticulously, and measure the diameter of each one, finding an average of 2.5 inches. Now, the exciting part:

Confidence Interval (95%): Based on statistical calculations and your sample size (the number of cookies you measured), there's a 95% chance that the true average diameter of all your cookies in the recipe lies somewhere between 2.2 inches and 2.8 inches. This is your "fence" around the estimated size.

Visualizing the Uncertainty:

Bar Graph with Error Bars:

Imagine a bar graph representing the average cookie diameter of 2.5 inches. On top of the bar, draw two vertical lines extending upwards and downwards, like miniature fences. These lines represent the upper and lower confidence limits (2.2 inches and 2.8 inches). The space between these lines is the shaded area, representing the 95% confidence interval.

The entire bar, including the lines and shaded area, conveys the estimated average cookie size and its likely range. This is a simple and clear way to visualize the information, ideal for presentations or general understanding.

Normal Distribution Curve:

Picture a bell-shaped curve representing the distribution of diameters for all potential cookies from your recipe. Mark the average diameter (2.5 inches) at the peak of the curve. Then, at a distance representing the standard deviation (a measure of variability), mark two points on either side of the peak. These points correspond to the confidence limits. Shade the area under the curve between these points, indicating the 95% confidence interval.

This visualization highlights the probability distribution of the data and visually represents the confidence interval as a portion of the overall population. It's a more nuanced approach, preferred for deeper statistical analysis or technical audiences.

Key Points:

A wider confidence interval indicates greater uncertainty. This could be due to a smaller number of cookies (your sample size) or varying sizes in your batch.

A higher confidence level (like 99%) comes at the cost of a wider interval. It's like building a larger, sturdier fence around your estimate, but encompassing a less specific area.

Remember, confidence intervals are not guarantees. There is a small chance (5%, for a 95% interval) that the true average lies outside the fence, and individual cookies will often fall outside it, since the interval describes the average size, not every single cookie.
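The bakery numbers can be reproduced with the standard library. This is a sketch under stated assumptions: the 12 diameters below are made up, and the normal critical value 1.96 is used for simplicity (a t value of about 2.20 would be more exact for n = 12):

```python
import math
import statistics

# Hypothetical diameters (inches) of the 12 cookies in the batch
diameters = [2.3, 2.6, 2.4, 2.7, 2.5, 2.2, 2.8, 2.5, 2.6, 2.4, 2.5, 2.5]

mean = statistics.mean(diameters)                       # point estimate
se = statistics.stdev(diameters) / math.sqrt(len(diameters))  # standard error
z = 1.96                                                # ~95% critical value
lower, upper = mean - z * se, mean + z * se             # confidence limits
print(f"95% CI: ({lower:.2f}, {upper:.2f}) around mean {mean:.2f}")
```

The lower and upper values printed are the confidence limits, and the range between them is the confidence interval; a larger batch (bigger n) would shrink the standard error and narrow the fence.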

Q1. iii) Distinguish between one-tailed test and two-tailed test.

Ans) A comparison between the one-tailed test and the two-tailed test is as follows:

a) Alternative hypothesis: A one-tailed test specifies the direction of the effect (H1: μ > μ0 or H1: μ < μ0), whereas a two-tailed test only states that a difference exists (H1: μ ≠ μ0).

b) Rejection region: In a one-tailed test the entire significance level α lies in one tail of the sampling distribution; in a two-tailed test it is split equally, with α/2 in each tail.

c) Critical values: At α = 0.05 under the normal curve, the one-tailed critical value is about 1.645, while the two-tailed critical values are about ±1.96.

d) Sensitivity: A one-tailed test is more powerful for detecting an effect in the specified direction, but it cannot detect an effect in the opposite direction; a two-tailed test guards against departures in either direction.

e) Use: A one-tailed test is appropriate only when there is a strong prior reason to expect the effect in one particular direction; otherwise the two-tailed test is the safer choice.
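The practical difference shows up in how the p-value is computed from the same test statistic. A sketch under the standard normal distribution (z = 1.80 is chosen purely for illustration):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_values(z):
    """One-tailed (right) and two-tailed p-values for a z statistic."""
    one_tailed = 1 - phi(z)               # H1: parameter > null value
    two_tailed = 2 * (1 - phi(abs(z)))    # H1: parameter != null value
    return one_tailed, two_tailed

# z = 1.80 is significant at the 5% level one-tailed, but not two-tailed
one, two = p_values(1.80)
print(round(one, 4), round(two, 4))
```

The two-tailed p-value is exactly twice the one-tailed value here, which is why a borderline result can pass a one-tailed test yet fail the corresponding two-tailed test.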

Section B


Answer the following questions in about 400 words each. Each question carries 12 marks.


Q3) Explain the properties of set operations with examples.


Ans) Properties of Set Operations with Examples

Sets are fundamental in mathematics, and set operations allow us to manipulate and analyse sets to derive new sets. The common set operations include union, intersection, difference, and complement. Understanding the properties of these operations is crucial for various branches of mathematics, including set theory and probability.


a) Union (A ∪ B):

1) Definition: The union of two sets A and B, denoted as A ∪ B, is the set of all elements that are in A, in B, or in both.

2) Properties:

i) Associativity: A∪(B∪C)=(A∪B)∪C

ii) Commutativity: A∪B=B∪A

iii) Identity Element: A∪∅=A

iv) Idempotence: A∪A=A

3) Example:

i) Let A={1,2,3} and B={3,4,5}.

ii) A∪B={1,2,3,4,5}.


b) Intersection (A ∩ B):

1) Definition: The intersection of two sets A and B, denoted as A ∩ B, is the set of all elements that are in both A and B.

2) Properties:

i) Associativity: A∩(B∩C)=(A∩B)∩C

ii) Commutativity: A∩B=B∩A

iii) Identity Element: A∩U=A, where U is the universal set.

iv) A∩(A′∪B)=A∩B, where A′ is the complement of A (this follows from distributivity, since A∩A′=∅).

3) Example:

i) Let A={1,2,3} and B={3,4,5}.

ii) A∩B={3}.


c) Set Difference (A − B):

1) Definition: The set difference of sets A and B, denoted as A − B, is the set of all elements that are in A but not in B.

2) Properties:

i) Not Commutative: A−B≠B−A

ii) Identity Element: A−∅=A

iii) Self Difference: A−A=∅

3) Example:

i) Let A={1,2,3} and B={3,4,5}.

ii) A−B={1,2}.


d) Complement (A′):

1) Definition: The complement of a set A, denoted as A′ or Aᶜ, is the set of all elements not in A with respect to a universal set U.

2) Properties:

i) Double Complement: (A′)′=A

ii) Complement of the Universal Set: U′=∅

iii) Complement of the Empty Set: ∅′=U

3) Example:

i) Let U={1,2,3,4,5} be the universal set, and A={1,2,3}.

ii) A′={4,5}.


Understanding these properties is crucial in solving problems involving sets, proving theorems, and applying set theory in various mathematical and computational contexts. The properties ensure consistency and provide a foundation for more advanced mathematical structures and concepts.
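These operations map directly onto Python's built-in set type, so the examples above can be verified mechanically. A quick sketch using the same sets A, B, and U:

```python
A = {1, 2, 3}
B = {3, 4, 5}
U = {1, 2, 3, 4, 5}      # universal set, used for the complement

print(A | B)             # union: {1, 2, 3, 4, 5}
print(A & B)             # intersection: {3}
print(A - B)             # difference: {1, 2}
print(U - A)             # complement of A relative to U: {4, 5}

# Spot-check the properties listed above
assert A | B == B | A                     # commutativity of union
assert A | set() == A                     # identity element for union
assert A - B != B - A                     # set difference is not commutative
assert A & ((U - A) | B) == A & B         # A ∩ (A′ ∪ B) = A ∩ B
```

Python has no complement operator because it has no implicit universal set; subtracting from an explicit U, as above, is the usual idiom.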


Q4) What is a discontinuous function? Discuss the two types of discontinuous functions along with their diagrams.

Ans) In a field dominated by smooth curves and coherent relationships, the discontinuous function stands out as a fascinating exception. Unlike its continuous counterpart, a discontinuous function does not link its neighbouring points seamlessly, like a single unbroken thread. Put more simply, its graph resembles a broken path: the pen must lift off at certain points, leaving gaps or jumps as we trace its course.


There are two key categories of discontinuities that disrupt this flow:

a)    Removable Discontinuities:

Imagine a bridge with a missing plank: the gap interrupts the journey, but it is easy to fix because the missing piece can simply be replaced. A removable discontinuity behaves the same way: the function's value is missing (or jumps) at a single point, yet the limit of the function at that point exists. It can be thought of as a momentary hiccup, smoothed out by redefining the function at that point to equal its existing limit.

1) Explanation:

i) The blue curve represents the function f(x) = (x² − x)/(x − 1), which simplifies to x but is defined for all x except 1.

ii) The graph has a gap at x = 1, where the function is undefined.

iii) However, both left-hand and right-hand limits (indicated by the orange arrows) approach 1 as x approaches 1.

iv) By redefining f(1) = 1 (filling the gap with a green dot), the function becomes continuous at x = 1.

b) Essential Discontinuities:

This category comprises obstacles that cannot be removed by any clever redefinition. An essential discontinuity occurs either when the limits themselves do not exist, or when the one-sided limits exist but disagree, approaching different values from either side of the point. The two sides cannot be joined in any simple way, like a chasm that no single plank can bridge.

1) Explanation:

i) The blue curve represents the function f(x) = 1/x, defined for all x except 0.

ii) The graph has a vertical asymptote at x = 0 (represented by a dashed line).

iii) As x approaches 0 from the left (indicated by the orange arrow on the left), the function tends towards positive infinity (shown by the upward arrow).

iv) As x approaches 0 from the right (orange arrow on the right), the function tends towards negative infinity (downward arrow).

v) No single value can bridge this infinite gap, making the discontinuity at x = 0 essential.
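The two behaviours can be probed numerically. A sketch using f(x) = (x² − x)/(x − 1), which simplifies to x away from x = 1 (chosen here as an illustrative removable discontinuity), alongside g(x) = 1/x:

```python
def f(x):
    """f(x) = (x^2 - x)/(x - 1); equals x everywhere except x = 1,
    where it is undefined -> a removable discontinuity."""
    return (x**2 - x) / (x - 1)

# Approach x = 1 from both sides: both one-sided limits tend to 1,
# so redefining f(1) = 1 removes the discontinuity.
for h in (0.1, 0.01, 0.001):
    print(f(1 - h), f(1 + h))

def g(x):
    """g(x) = 1/x has an essential (infinite) discontinuity at x = 0."""
    return 1 / x

# The one-sided values diverge to -infinity and +infinity: no single
# value for g(0) can bridge the gap.
print(g(-0.001), g(0.001))
```

The printed values for f close in on 1 from both sides, while those for g fly apart, which is exactly the removable-versus-essential distinction described above.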

Q5) Explain the concept of a stationary point and inflexion point. Is a stationary point always a point of inflexion? Why or why not?

Ans) In the fascinating dance of curves and slopes that unfolds in calculus, stationary points and inflection points take the spotlight as intriguing milestones on the graph's journey. While both relate to changes in the function's behaviour, their roles are distinct and deserve careful attention.

Stationary Points

Imagine a hiker resting on a plateau during a long trek. This point of no ascent or descent captures the essence of a stationary point. In mathematical terms, a stationary point occurs when the first derivative of a function, f'(x), equals zero. At this point, the slope of the graph flattens, neither increasing nor decreasing. Think of it as a moment of pause before the curve embarks on another climb or plunge.

However, this pause doesn't necessarily reveal the complete story. Just like the plateau could lead to either climbing a mountain or descending into a valley, a stationary point can be a precursor to different scenarios:

a) Local Maximum: If the first derivative changes from positive to negative around the stationary point, it indicates a peak - a local maximum where the function reaches its highest value within a specific interval.

b) Local Minimum: Conversely, if the first derivative changes from negative to positive, it reveals a trough - a local minimum where the function dips to its lowest value within an interval.

c) Saddle Point (Stationary Point of Inflexion): In some cases, the first derivative equals zero but does not change sign, remaining positive on both sides (or negative on both sides) of the point, as for y = x³ at x = 0. Here the graph flattens momentarily without reaching a maximum or minimum.

Inflection Points

While stationary points reveal moments of paused slope, inflection points mark a more profound shift in the curve's trajectory. They occur when the second derivative, f''(x), equals zero and changes sign around that point. Think of it as a "change of heart" for the curve - switching from bending upwards (concave up) to bending downwards (concave down), or vice versa.

Inflection points are not always stationary points. They can exist independently, even when the first derivative is not zero. For example, the graph of y = x³ + x has an inflection point at x = 0, where the curve changes from concave down to concave up, even though the first derivative (y′ = 3x² + 1) is never zero.

Reason for stationary point not being a point of inflection

Therefore, a stationary point is not always a point of inflexion. At a local maximum or minimum the slope is zero, but the concavity does not change sign, so these stationary points are not inflexion points. Only a stationary point at which the concavity also changes sign, such as the saddle point of y = x³ at x = 0, is simultaneously a stationary point and a point of inflexion. The presence of a stationary point merely indicates a pause in the slope, while an inflexion point marks a turning point in the curve's direction of bending.

In conclusion, understanding the subtle differences between stationary points and inflection points allows us to decode the language of curves with greater precision. These points act as signposts, guiding us through the changing landscape of functions and revealing their hidden secrets. By deciphering their meaning, we gain a deeper appreciation for the dynamic dance of slopes and curvature that lies at the heart of calculus.
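The distinctions above can be checked numerically. This is a rough sketch using central finite differences; the step size, tolerances, and example functions are illustrative choices:

```python
def classify(f, x, h=1e-5):
    """Roughly classify the point x: is it stationary (f'(x) = 0)?
    Is it an inflection point (f'' changes sign across x)?"""
    d1 = (f(x + h) - f(x - h)) / (2 * h)          # central estimate of f'(x)
    stationary = abs(d1) < 1e-6

    # Estimate f'' just left and just right of x; a sign change
    # in concavity across x marks an inflection point.
    d2_left = (f(x) - 2 * f(x - h) + f(x - 2 * h)) / h**2
    d2_right = (f(x + 2 * h) - 2 * f(x + h) + f(x)) / h**2
    inflection = d2_left * d2_right < 0
    return stationary, inflection

print(classify(lambda x: x**2, 0.0))       # stationary, not an inflection
print(classify(lambda x: x**3, 0.0))       # stationary AND an inflection
print(classify(lambda x: x**3 + x, 0.0))   # inflection, not stationary
```

The three cases mirror the text: a minimum of y = x², the stationary inflexion of y = x³, and the non-stationary inflexion of y = x³ + x.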

Q6) Define normal distribution. Discuss the two parameters which are integral to its definition.


Ans) Definition of Normal Distribution

A normal distribution, also known as a Gaussian distribution, is a continuous probability distribution that is symmetric and bell-shaped. It is characterized by the famous bell curve, where the majority of data clusters around the mean, and the probabilities decrease as values deviate from the mean. The normal distribution is a fundamental concept in statistics and probability theory and has widespread applications in various fields.

Parameters of the Normal Distribution

The normal distribution is defined by two parameters: the mean (μ) and the standard deviation (σ). These parameters play a crucial role in shaping the distribution and determining its characteristics.

a) Mean (μ):

The mean of a normal distribution represents the central or average value around which the data are centered. In the context of the normal distribution, the mean is also the point of symmetry. Mathematically, the mean is the arithmetic average of all data points in the distribution.

1) Effect on Shape:

i) Changing the mean shifts the entire distribution horizontally.

ii) If the mean increases, the distribution shifts to the right.

iii) If the mean decreases, the distribution shifts to the left.

2) Notation:

μ represents the mean in the formula for the probability density function (PDF) of the normal distribution.

3) Example:

If a normal distribution has a mean (μ) of 50, it implies that the central tendency of the data is around 50.

b) Standard Deviation (σ):

The standard deviation of a normal distribution measures the spread or dispersion of the data points around the mean. It indicates how much individual data points deviate from the average. A smaller standard deviation suggests that data points are clustered closely around the mean, while a larger standard deviation indicates greater dispersion.

1) Effect on Shape:

i) Changing the standard deviation influences the width of the distribution.

ii) A larger standard deviation results in a wider, more spread-out distribution.

iii) A smaller standard deviation leads to a narrower, more concentrated distribution.

2) Notation:

σ represents the standard deviation in the formula for the probability density function (PDF) of the normal distribution.

3) Example:

If a normal distribution has a standard deviation (σ) of 10, it implies that the data points vary around the mean by approximately 10 units.

c) Probability Density Function (PDF):

The probability density function of the normal distribution is given by the formula:

f(x) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²)), where:

x is a random variable.

μ is the mean.

σ is the standard deviation.

π is the mathematical constant pi (approximately 3.14159).

e is the base of the natural logarithm.

In conclusion, the normal distribution is characterized by its mean and standard deviation. These parameters define the central tendency and the spread of the data, providing a comprehensive understanding of the distribution's shape and characteristics. The normal distribution is a fundamental concept in statistics and serves as a basis for various statistical analyses and hypothesis testing procedures.
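The PDF above is straightforward to implement. A minimal sketch, with μ = 50 and σ = 10 chosen to match the examples in the text:

```python
import math

def normal_pdf(x, mu, sigma):
    """Probability density of the normal distribution N(mu, sigma^2)."""
    coeff = 1 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# The curve peaks at the mean mu and is symmetric around it
print(normal_pdf(50, mu=50, sigma=10))                  # height at the mean
print(normal_pdf(40, 50, 10), normal_pdf(60, 50, 10))   # equal by symmetry
```

Shifting mu slides the whole curve horizontally, and enlarging sigma lowers the peak and widens the spread, exactly the two effects described for the mean and standard deviation above.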

Q7) Write short notes on the following:

Q7 i) Local maxima

Ans) In mathematical analysis, a local maximum (plural: maxima) refers to a point in the domain of a function where the function attains its highest value in a specific neighbourhood of that point. Local maxima are critical points in the analysis of functions and play a significant role in determining the behaviour of a function in its vicinity.

Characteristics of Local Maxima

a) Derivative Test: To identify local maxima, one commonly uses the first derivative test. At a local maximum, the first derivative of the function changes sign from positive to negative. In other words, the slope of the function is positive before the maximum and becomes negative immediately after.

b) Graphical Representation: On the graph of a function, local maxima correspond to peaks or high points. The tangent line at a local maximum is horizontal, indicating a slope of zero.

c) Critical Points: Local maxima are critical points where the derivative of the function is zero or undefined. However, not all critical points are local maxima; some may be points of inflection or local minima.

In summary, local maxima are points where a function reaches a peak in a specific region. They are essential in understanding the behaviour of functions and are often employed in optimization problems and calculus.
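The first derivative test described above can be sketched numerically; the function below is a hypothetical parabola with its peak at x = 2:

```python
def deriv(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: -(x - 2) ** 2 + 5    # local (and global) maximum at x = 2

# First derivative test: slope positive before the point,
# zero at the point, negative after -> local maximum.
print(deriv(f, 1.9) > 0, abs(deriv(f, 2.0)) < 1e-6, deriv(f, 2.1) < 0)
```

All three checks come out True: the sign of the slope flips from positive to negative across x = 2, confirming a local maximum.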

Q7 ii) Mapping and function

Ans) Mapping

A mapping, in mathematics, refers to a relationship or correspondence between two sets that assigns each element from the first set to exactly one element in the second set. Mappings are fundamental in various branches of mathematics, providing a way to describe how elements from one set are paired with elements in another set. A mapping is often represented by an arrow diagram or using set notation.

a) Representation: A mapping from set A to set B is often denoted as f:A→B, where f is the name of the mapping.

b) Injective (One-to-One): A mapping is injective if each element in set A maps to a unique element in set B. No two distinct elements in A map to the same element in B.

c) Surjective (Onto): A mapping is surjective if every element in set B has at least one pre-image in set A.

d) Bijective: A mapping is bijective if it is both injective and surjective. In a bijective mapping, each element in set A has a unique image in set B, and every element in set B has a pre-image in set A.


Function

A function is a special type of mapping that satisfies the condition that each element in the domain (input) is associated with exactly one element in the codomain (output). Functions are a fundamental concept in calculus, analysis, and various branches of mathematics.

a) Domain and Codomain: A function f:A→B implies that A is the domain, B is the codomain, and each element a∈A is associated with a unique element b∈B.

b) Notation: A function is often denoted by y=f(x), where y is the output, x is the input, and f is the function.

c) Graphical Representation: The graph of a function is a visual representation that illustrates the relationship between the input and output values. The vertical line test helps determine if a graph represents a function.

d) Types of Functions: Functions can be linear, quadratic, exponential, trigonometric, and more, each with specific properties and behaviours.

In summary, mapping is a general concept describing relationships between sets, while a function is a specific type of mapping that adheres to the condition of associating each element in the domain with a unique element in the codomain. Functions are fundamental in modelling real-world phenomena and solving mathematical problems.
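When the sets are finite, the injective/surjective/bijective properties can be checked mechanically. A sketch representing a mapping as a Python dict (the example mappings f and g are hypothetical):

```python
def is_injective(mapping):
    """Injective (one-to-one): no two keys share the same image."""
    return len(set(mapping.values())) == len(mapping)

def is_surjective(mapping, codomain):
    """Surjective (onto): every codomain element has a pre-image."""
    return set(mapping.values()) == set(codomain)

f = {1: 'a', 2: 'b', 3: 'c'}    # A = {1, 2, 3} -> B = {'a', 'b', 'c'}
g = {1: 'a', 2: 'a', 3: 'b'}    # 1 and 2 both map to 'a'

print(is_injective(f), is_surjective(f, {'a', 'b', 'c'}))  # bijective
print(is_injective(g), is_surjective(g, {'a', 'b', 'c'}))
```

The dict itself enforces the defining property of a function: each key (domain element) can have only one value (image).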

Q7 iii) Biases in the survey

Ans) Surveys are widely used in research to collect data and insights from a sample of individuals or a population. However, biases can affect the accuracy and reliability of survey results. Understanding these biases is crucial for ensuring the validity of findings. Here are some common biases in surveys:

a) Selection Bias:

1) Description: Occurs when the sample selected for the survey is not representative of the entire population.

2) Impact: Results may not generalize to the broader population, leading to skewed conclusions.

b) Non-Response Bias:

1) Description: Arises when certain groups within the sample are more likely to respond than others.

2) Impact: Can result in underrepresentation of specific perspectives, creating a distorted view of the population.

c) Social Desirability Bias:

1) Description: Respondents may provide answers that align with social norms or are perceived as socially acceptable rather than expressing their true opinions or behaviours.

2) Impact: Skews results by presenting a more favourable or conforming picture than reality.

d) Response Bias:

1) Description: Occurs when respondents provide inaccurate or misleading answers, either unintentionally or deliberately.

2) Impact: Leads to unreliable data, affecting the validity of survey results.

e) Sampling Bias:

1) Description: Results from an unrepresentative sample selection, often due to flaws in the sampling method.

2) Impact: Findings may not accurately reflect the characteristics of the target population.

f) Recall Bias:

1) Description: Respondents may have difficulty accurately recalling past events or experiences, leading to inaccuracies in their responses.

2) Impact: Introduces errors in data related to historical or retrospective information.

g) Cultural Bias:

1) Description: Arises when survey questions, design, or interpretation are culturally insensitive, leading to misinterpretation or exclusion of certain groups.

2) Impact: Hampers the cross-cultural applicability of survey results.

h) Confirmation Bias:

1) Description: Survey designers, analysts, or respondents may unconsciously seek or interpret information that confirms pre-existing beliefs or hypotheses.

2) Impact: Results may be skewed, and conclusions may be drawn based on preconceived notions rather than objective analysis.

Addressing and minimizing biases in surveys is essential for producing reliable and valid data. Strategies such as random sampling, careful question design, and awareness of potential biases can help mitigate these issues and enhance the quality of survey findings. Regular validation and verification processes should also be employed to ensure the accuracy of collected data.

Q7 iv) Finite and infinite sets


Ans) Finite Sets

A finite set is a set that contains a definite, countable number of elements. In other words, the number of elements in a finite set can be determined, and this number is a non-negative integer.

a) Example: Let A={1,2,3} be a finite set. The elements in set A can be counted, and there are three elements in total.

b) Cardinality: The cardinality of a finite set is the number of elements it contains. For a finite set A, the cardinality is denoted as ∣A∣.

c) Properties:

1) Operations on finite sets, such as union, intersection, and complement, can be easily defined.

2) Finite sets are well-defined and can be listed explicitly.

Infinite Sets

An infinite set is a set that contains an infinite number of elements. Unlike finite sets, the elements in an infinite set cannot be counted exhaustively, and there is no last element.

a) Example: Let B={1,2,3,...} be an infinite set representing the set of natural numbers. The ellipsis (...) indicates that the set continues indefinitely.

b) Cardinality: The cardinality of an infinite set is not a finite number. Instead, infinite sets are categorized based on their cardinality, such as countably infinite or uncountably infinite.

c) Properties:

1) Operations on infinite sets often require more sophisticated tools, such as limits and convergence.

2) Infinite sets may exhibit surprising properties, such as having the same cardinality as proper subsets.

Understanding the distinction between finite and infinite sets is fundamental in set theory and various branches of mathematics. The behaviour and properties of these sets significantly impact mathematical reasoning and proofs, providing a foundation for more advanced concepts such as limits, continuity, and infinite series.
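The contrast can be illustrated in code: a finite set has a countable length, while an infinite set can only be represented lazily, one element at a time. A sketch using the standard library:

```python
import itertools

A = {1, 2, 3}          # finite: its cardinality is simply len(A)
print(len(A))          # 3

# An infinite set such as the naturals can only be represented
# lazily, e.g. as a generator that is never exhausted.
naturals = itertools.count(1)                    # 1, 2, 3, ...
first_five = list(itertools.islice(naturals, 5))
print(first_five)                                # [1, 2, 3, 4, 5]

# A countably infinite set has the same cardinality as a proper
# subset: n -> 2n pairs the naturals bijectively with the evens.
evens = (2 * n for n in itertools.count(1))
print(list(itertools.islice(evens, 5)))          # [2, 4, 6, 8, 10]
```

Calling len() on the generator would fail, which mirrors the mathematical point: the cardinality of an infinite set is not a finite number.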
