Curve Fitting: Linear Regression

Regression is all about fitting a low order parametric model or curve to data, so we can reason about it or make predictions on points not covered by the data. Both data and model are known, but we'd like to find the model parameters that make the model fit the data best, or at least well enough, according to some metric.

We may also be interested in how well the model supports the data, or whether we should rather look for another, more appropriate model.

In a regression, a lot of data is reduced and generalized into a few parameters. The resulting model can obviously no longer reproduce all the original data exactly; if you need the data to be reproduced exactly, have a look at interpolation instead.

Simple Regression: Fit to a Line

In the simplest yet still common form of regression we would like to fit a line \(y : x \mapsto a + b x\) to a set of points \((x_j,y_j)\), where \(x_j\) and \(y_j\) are scalars. Assuming we have two double arrays for x and y, we can use Fit.Line to evaluate the \(a\) and \(b\) parameters of the least squares fit:

double[] xdata = new double[] { 10, 20, 30 };
double[] ydata = new double[] { 15, 20, 25 };

Tuple<double, double> p = Fit.Line(xdata, ydata);
double a = p.Item1; // == 10; intercept
double b = p.Item2; // == 0.5; slope
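With the fitted parameters we can predict \(y\) at points not covered by the data; a quick sketch (the point x = 40 is made up for illustration):

double y40 = a + b * 40.0; // == 30.0; predicted y at x = 40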

Or in F#:

let a, b = Fit.Line ([|10.0;20.0;30.0|], [|15.0;20.0;25.0|])

How well do these parameters fit the data? The data points happen to be positioned exactly on a line. Indeed, the coefficient of determination confirms the perfect fit:

GoodnessOfFit.RSquared(xdata.Select(x => a+b*x), ydata); // == 1.0

Linear Model

In practice, a line is often not an adequate model. But if we can choose a model that is linear, we can leverage the power of linear algebra; otherwise we have to resort to iterative methods (see Nonlinear Optimization).

A linear model can be described as a linear combination of \(N\) arbitrary but known functions \(f_i(x)\), scaled by the model parameters \(p_i\). Note that none of the functions \(f_i\) depends on any of the \(p_i\) parameters.


\[y : x \mapsto p_1 f_1(x) + p_2 f_2(x) + \cdots + p_N f_N(x)\]

If we have \(M\) data points \((x_j,y_j)\), then we can write the regression problem as an overdetermined system of \(M\) equations:

\[\begin{eqnarray} y_1 &=& p_1 f_1(x_1) + p_2 f_2(x_1) + \cdots + p_N f_N(x_1) \\ y_2 &=& p_1 f_1(x_2) + p_2 f_2(x_2) + \cdots + p_N f_N(x_2) \\ &\vdots& \\ y_M &=& p_1 f_1(x_M) + p_2 f_2(x_M) + \cdots + p_N f_N(x_M)\end{eqnarray}\]

Or in matrix notation with the predictor matrix \(X\) and the response \(y\):

\[\begin{eqnarray} \mathbf y &=& \mathbf X \mathbf p \\ \begin{bmatrix}y_1\\y_2\\ \vdots \\y_M\end{bmatrix} &=& \begin{bmatrix}f_1(x_1) & f_2(x_1) & \cdots & f_N(x_1)\\f_1(x_2) & f_2(x_2) & \cdots & f_N(x_2)\\ \vdots & \vdots & \ddots & \vdots\\f_1(x_M) & f_2(x_M) & \cdots & f_N(x_M)\end{bmatrix} \begin{bmatrix}p_1\\p_2\\ \vdots \\p_N\end{bmatrix}\end{eqnarray}\]

Provided the dataset is small enough, if transformed to the normal equation \(\mathbf{X}^T\mathbf y = \mathbf{X}^T\mathbf X \mathbf p\) this can be solved efficiently by the Cholesky decomposition (do not use matrix inversion!).

Vector<double> p = MultipleRegression.NormalEquations(X, y);

Using normal equations is comparatively fast as it can dramatically reduce the size of the linear algebra problem to be solved, but that comes at the cost of less precision. If you need more precision, try using MultipleRegression.QR or MultipleRegression.Svd instead, with the same arguments.
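To make the connection to the matrix form explicit, here is a sketch that builds the predictor matrix for the straight-line example above with the standard Math.NET builders and solves it with each of the three decompositions (namespaces MathNet.Numerics.LinearAlgebra and MathNet.Numerics.LinearRegression are assumed):

// Predictor matrix X for y = a + b*x: each row is [f_1(x_j), f_2(x_j)] = [1, x_j]
Matrix<double> X = Matrix<double>.Build.DenseOfArray(new double[,] {
    { 1, 10 },
    { 1, 20 },
    { 1, 30 } });
Vector<double> y = Vector<double>.Build.DenseOfArray(new double[] { 15, 20, 25 });

Vector<double> p1 = MultipleRegression.NormalEquations(X, y); // fast, least precise
Vector<double> p2 = MultipleRegression.QR(X, y);              // slower, more precise
Vector<double> p3 = MultipleRegression.Svd(X, y);             // slowest, most robust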

Polynomial Regression

To fit to a polynomial we can choose the following linear model with \(f_i(x) := x^i\):

\[y : x \mapsto p_0 + p_1 x + p_2 x^2 + \cdots + p_N x^N\]


The predictor matrix of this model is the Vandermonde matrix. There is a special function in the Fit class for regressions to a polynomial, but note that regression to high order polynomials is numerically problematic.

double[] p = Fit.Polynomial(xdata, ydata, 3); // polynomial of order 3
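The coefficients come back lowest order first, so p[0] is the constant term. Evaluating the fitted cubic at some point can then be sketched with Horner's scheme (the evaluation point is made up for illustration):

// evaluate y = p0 + p1*x + p2*x^2 + p3*x^3 via Horner's scheme
double x = 2.5;
double yAtX = ((p[3] * x + p[2]) * x + p[1]) * x + p[0];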

Multiple Regression

The \(x\) in the linear model can also be a vector \(\mathbf x = [x^{(1)}\; x^{(2)} \cdots x^{(k)}]\) and the arbitrary functions \(f_i(\mathbf x)\) can accept vectors instead of scalars.

If we use \(f_i(\mathbf x) := x^{(i)}\) and add an intercept term \(f_0(\mathbf x) := 1\) we end up at the simplest form of ordinary multiple regression:

\[y : x \mapsto p_0 + p_1 x^{(1)} + p_2 x^{(2)} + \cdots + p_N x^{(N)}\]

For example, for the data points \((\mathbf{x}_j = [x^{(1)}_j\; x^{(2)}_j], y_j)\) with values ([1,4],15), ([2,5],20) and ([3,2],10) we can evaluate the best fitting parameters with:

double[] p = Fit.MultiDim(
    new[] { new[] { 1.0, 4.0 }, new[] { 2.0, 5.0 }, new[] { 3.0, 2.0 } },
    new[] { 15.0, 20, 10 },
    intercept: true);
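With intercept: true the intercept is returned as the first element, so a prediction at a hypothetical new point [2, 3] would look like:

// y = p0 + p1*x(1) + p2*x(2)
double prediction = p[0] + p[1] * 2.0 + p[2] * 3.0;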

The Fit.MultiDim routine uses normal equations, but you can always choose to explicitly use e.g. the QR decomposition for more precision by using the MultipleRegression class directly:

double[] p = MultipleRegression.QR(
    new[] { new[] { 1.0, 4.0 }, new[] { 2.0, 5.0 }, new[] { 3.0, 2.0 } },
    new[] { 15.0, 20, 10 },
    intercept: true);

Arbitrary Linear Combination

In multiple regression, the functions \(f_i(\mathbf x)\) can also operate on the whole vector or mix its components arbitrarily and apply any functions on them, provided they are defined at all the data points. For example, let's have a look at the following complicated but still linear model in two dimensions:


\[z : (x, y) \mapsto p_0 + p_1 \mathrm{tanh}(x) + p_2 \psi(x y) + p_3 x^y\]

Since we map (x,y) to (z) we need to organize the tuples in two arrays:

double[][] xy = new[] { new[] { x1, y1 }, new[] { x2, y2 }, new[] { x3, y3 }, ... };
double[] z = new[] { z1, z2, z3, ... };

Then we can call Fit.LinearMultiDim with our model, which will return an array with the four best fitting parameters \(p_0, p_1, p_2, p_3\):

double[] p = Fit.LinearMultiDim(xy, z,
    d => 1.0,                                  // p0*1.0
    d => Math.Tanh(d[0]),                      // p1*tanh(x)
    d => SpecialFunctions.DiGamma(d[0]*d[1]),  // p2*psi(x*y)
    d => Math.Pow(d[0], d[1]));                // p3*x^y

Evaluating the model at specific data points

Let's say we have the following model:

\[y : x \mapsto a + b \ln x\]

For this case we can use the Fit.LinearCombination function:

double[] p = Fit.LinearCombination(
    new[] { 61.0, 62.0, 63.0, 65.0 },
    new[] { 3.6, 3.8, 4.8, 4.1 },
    x => 1.0,
    x => Math.Log(x));
// -34.481, 9.316
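These parameters can then be applied to the model by hand, for example at x = 66 (this matches the Func-based evaluation below):

double y66 = p[0] * 1.0 + p[1] * Math.Log(66.0); // ≈ 4.548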

In order to evaluate the resulting model at specific data points we can manually apply the values of p to the model function, as sketched above, or we can use an alternative function with the Func suffix that returns a function instead of the model parameters. The returned function can then be used to evaluate the parametrized model:


Func<double, double> f = Fit.LinearCombinationFunc(
    new[] { 61.0, 62.0, 63.0, 65.0 },
    new[] { 3.6, 3.8, 4.8, 4.1 },
    x => 1.0,
    x => Math.Log(x));

f(66.0); // 4.548

Linearizing non-linear models by transformation

Sometimes it is possible to transform a non-linear model into a linear one. For example, the following power function

\[z : (x, y) \mapsto u x^v y^w\]

can be transformed into the following linear model with \(\hat{z} = \ln z\) and \(t = \ln u\):

\[\hat{z} : (x, y) \mapsto t + v \ln x + w \ln y\]

var xy = new[] { new[] { 1.0, 4.0 }, new[] { 2.0, 5.0 }, new[] { 3.0, 2.0 } };
var z = new[] { 15.0, 20, 10 };

// transform: z_hat = ln(z)
var z_hat = z.Select(r => Math.Log(r)).ToArray();

double[] p_hat = Fit.LinearMultiDim(xy, z_hat,
    d => 1.0,
    d => Math.Log(d[0]),
    d => Math.Log(d[1]));

double u = Math.Exp(p_hat[0]); // transform back: t = ln(u)
double v = p_hat[1];
double w = p_hat[2];
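As a sanity check, the recovered parameters can be plugged back into the original power model and compared against one of the data points:

// z = u * x^v * y^w evaluated at the first data point [1, 4]
double zCheck = u * Math.Pow(1.0, v) * Math.Pow(4.0, w); // should be close to 15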

Weighted Regression

Sometimes the regression error can be reduced by dampening specific data points. We can achieve this by introducing a weight matrix \(W\) into the normal equations \(\mathbf{X}^T\mathbf{y} = \mathbf{X}^T\mathbf{X}\mathbf{p}\). Such weight matrices are often diagonal, with a separate weight for each data point on the diagonal.

\[\mathbf{X}^T\mathbf{W}\mathbf{y} = \mathbf{X}^T\mathbf{W}\mathbf{X}\mathbf{p}\]

var p = WeightedRegression.Weighted(X,y,W);
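A diagonal W can be built from one weight per data point; a sketch with made-up weight values, using the standard Math.NET diagonal builder:

double[] weights = { 1.0, 2.0, 0.5 }; // hypothetical per-point weights
Matrix<double> W = Matrix<double>.Build.DenseOfDiagonalArray(weights);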

Weighted regression becomes interesting if we can adapt the weights to the point of interest, e.g. to dampen all data points far away from it. Unfortunately, the model parameters then depend on the point of interest \(t\).


// warning: preliminary API
var p = WeightedRegression.Local(X, y, t, radius, kernel);

Regularization

Iterative Methods


FAQs

How do you know if a curve fit is good?

The adjusted R-square statistic is generally the best indicator of the fit quality when you add additional coefficients to your model. The adjusted R-square statistic can take on any value less than or equal to 1, with a value closer to 1 indicating a better fit. An RMSE value closer to 0 indicates a better fit.

How many data points are enough?

Lilienthal's rule: If you want to fit a straight-line to your data, be certain to collect only two data points. A straight line can always be made to fit through two data points. Corollary: If you are not concerned with random error in your data collection process, just collect three data points.

How many data points are enough for regression?

For example, in regression analysis, many researchers say that there should be at least 10 observations per variable. If we are using three independent variables, then a clear rule would be to have a minimum sample size of 30.

How do you tell if a regression equation is a good fit?

The best fit line is the one that minimises the sum of squared differences between actual and estimated results. Taking the average of the squared differences is known as the Mean Squared Error (MSE). The smaller the value, the better the regression model.

What is best fit in curve fitting?

Curve fitting is one of the most powerful and most widely used analysis tools in Origin. Curve fitting examines the relationship between one or more predictors (independent variables) and a response variable (dependent variable), with the goal of defining a "best fit" model of the relationship.

What is the most efficient curve fitting method in linear regression?

The most common way to fit curves to the data using linear regression is to include polynomial terms, such as squared or cubed predictors. Typically, you choose the model order by the number of bends you need in your line. Each increase in the exponent produces one more bend in the fitted curve.

How many points do you need to fit a curve?

The rule of thumb in applied sciences and engineering is you need a minimum 3 points over 3 orders of magnitude for a curve fit.

What is a large enough sample size?

In practice, some statisticians say that a sample size of 30 is large enough when the population distribution is roughly bell-shaped. Others recommend a sample size of at least 40.

What is a sufficient sample size?

Sufficient sample size is the minimum number of participants required to identify a statistically significant difference if a difference truly exists. Statistical significance does not mean clinical significance.

Why is 30 the minimum sample size?

A sample size of 30 often increases the confidence interval of your population data set enough to warrant assertions against your findings. The higher your sample size, the more likely the sample will be representative of your population set.

Is 30 a good sample size for quantitative research?

If the research has a relational survey design, the sample size should not be less than 30. Causal-comparative and experimental studies require more than 50 samples. In survey research, 100 samples should be identified for each major sub-group in the population and between 20 to 50 samples for each minor sub-group.

Is 100 a good sample size?

The minimum sample size is 100

Most statisticians agree that the minimum sample size to get any kind of meaningful result is 100. If your population is less than 100 then you really need to survey all of them.

How do we know if the model is good enough?

There are a variety of metrics for scoring whether a model is “good” or “bad” such as R2, percentage accuracy, mean absolute percentage error (MAPE), and many more. Each of these has advantages and disadvantages, but share one common trait – they are designed to compare, not evaluate performance in a vacuum.

How do you know if a linear model is reasonable?

If a linear model is appropriate, the histogram should look approximately normal and the scatterplot of residuals should show random scatter. If we see a curved relationship in the residual plot, the linear model is not appropriate. Another type of residual plot shows the residuals versus the explanatory variable.

How do you tell if a regression model is a good fit in R?

A good way to test the quality of the fit of the model is to look at the residuals, the differences between the real values and the predicted values. In a plot of the fitted line against the observed data, the vertical distance from each observed data value to the line is its residual.

Is curve fitting the same as regression?

In regression analysis, curve fitting is the process of specifying the model that provides the best fit to the specific curves in your dataset. Curved relationships between variables are not as straightforward to fit and interpret as linear relationships.

Why is curve fitting necessary?

Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables.

Can a linear regression be curved?

Linear regression models, while they typically form a straight line, can also form curves, depending on the form of the linear regression equation.

Which of the following methods is most commonly used for fitting a regression line?

Statisticians typically use the least squares method (sometimes known as ordinary least squares, or OLS) to arrive at the geometric equation for the line, either through manual calculations or by using software. A simple linear regression of one independent variable yields a straight line.

What is the difference between interpolation and curve fitting?

Interpolation is to connect discrete data points so that one can get reasonable estimates of data points between the given points. Curve fitting is to find a curve that could best indicate the trend of a given set of data.

Why do we need the least squares method?

The least squares method is a statistical procedure to find the best fit for a set of data points by minimizing the sum of the offsets or residuals of points from the plotted curve. Least squares regression is used to predict the behavior of dependent variables.
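In symbols, least squares chooses the parameters minimizing the sum of squared residuals over all \(M\) data points:

\[\hat{\mathbf p} = \arg\min_{\mathbf p} \sum_{j=1}^{M} \bigl(y_j - \hat{y}(x_j; \mathbf p)\bigr)^2\]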

How many points do you need to fit a Gaussian?

If one is sampling off of a mathematically-defined curve (e.g. calculating the value of a function at several points) then two points are enough to completely define a normalized Gaussian distribution, and three points are enough to completely define an unnormalized Gaussian distribution.

How many points do you need for an exponential?

From a Pair of Points to a Graph

If you have two points, (x1, y1) and (x2, y2), you can define the exponential function that passes through these points by substituting them in the equation y = ab^x and solving for a and b.

What is curve fitting with example?

The process of finding the equation of the curve of best fit, which may be most suitable for predicting unknown values, is known as curve fitting; for example, fitting a curve of pressure p against volume v from measured (v, p) pairs. Curve fitting thus expresses an approximate relationship between two variables by algebraic equations.

What happens if sample size is less than 30?

For example, when we are comparing the means of two populations, if the sample size is less than 30, then we use the t-test. If the sample size is greater than 30, then we use the z-test.

Is sample size 40 enough?

You have a symmetric distribution or unimodal distribution without outliers: a sample size of 15 is “large enough.” You have a moderately skewed distribution, that's unimodal without outliers; If your sample size is between 16 and 40, it's “large enough.”

What if my sample size is too small?

Too small a sample may prevent the findings from being extrapolated, whereas too large a sample may amplify the detection of differences, emphasizing statistical differences that are not clinically relevant.

Is 100 respondents enough for quantitative research?

In most cases, we recommend 40 participants for quantitative studies. If you don't really care about the reasoning behind that number, you can stop reading here. Read on if you do want to know where that number comes from, when to use a different number, and why you may have seen different recommendations.

What sample size do I need for 95% confidence?

To be 95% confident that the true value of the estimate will be within 5 percentage points of 0.5, (that is, between the values of 0.45 and 0.55), the required sample size is 385. This is the number of actual responses needed to achieve the stated level of accuracy.

Why must sample size be greater than 30?

Sample sizes equal to or greater than 30 are required for the central limit theorem to hold true. A sufficiently large sample can predict the parameters of a population, such as the mean and standard deviation.

Why is small sample size bad?

Small samples are bad. Why? If we pick a small sample, we run a greater risk of the small sample being unusual just by chance. Choosing 5 people to represent the entire U.S., even if they are chosen completely at random, will often result in a sample that is very unrepresentative of the population.

What is considered a small sample size?

There are appropriate statistical methods to deal with small sample sizes. Although one researcher's “small” is another's large, when I refer to small sample sizes I mean studies that have typically between 5 and 30 users total—a size very common in usability studies.

Is 50 respondents enough for a survey?

A sample size consisting of 50-100 respondents will be sufficient for obtaining comprehensive behavioral insights during emotion measurement.

Is 10 participants enough for qualitative research?

Ensuring you've hit the right number of participants

In "The logic of small samples in interview-based qualitative research", authors Mira Crouch and Heather McKenzie note that using fewer than 20 participants during a qualitative research study will result in better data.

How do I know how many participants I need for a study?

All you have to do is take the number of respondents you need, divide by your expected response rate, and multiply by 100. For example, if you need 500 customers to respond to your survey and you know the response rate is 30%, you should invite about 1,666 people to your study (500/30*100 = 1,666).

Is a sample size of 300 enough?

As a general rule, sample sizes of 200 to 300 respondents provide an acceptable margin of error and fall before the point of diminishing returns.

Is 150 a good sample size?

The ideal sample size for agile testing

For the type of agile iterative testing we encourage, n=150 is a good, cost-efficient baseline and our platform default. However, if you want greater accuracy or want to analyze your data over multiple sub-groups, you should increase the sample size.

How do you determine the number of samples needed?

Follow these steps to calculate the sample size needed for your survey or experiment:
  1. Determine the total population size. First, you need to determine the total number of your target demographic. ...
  2. Decide on a margin of error. ...
  3. Choose a confidence level. ...
  4. Pick a standard of deviation. ...
  5. Complete the calculation.

Is an accuracy of 70% good?

There is a general rule when it comes to understanding accuracy scores: Over 90% - Very good. Between 70% and 90% - Good. Between 60% and 70% - OK.

What makes a good linear regression model?

For a good regression model, you want to include the variables that you are specifically testing along with other variables that affect the response in order to avoid biased results. Minitab Statistical Software offers statistical measures and procedures that help you specify your regression model.

What is a good regression value?

What qualifies as a “good” R-Squared value will depend on the context. In some fields, such as the social sciences, even a relatively low R-Squared such as 0.5 could be considered relatively strong. In other fields, the standards for a good R-Squared reading can be much higher, such as 0.9 or above.

How do you know if a regression model is adequately specified?

Adequacy Check #1: Plot of straight-line regression model vs. data to visually inspect how well the data fits the line. Adequacy Check #2: Calculation of the coefficient of determination, r2. This value quantifies the percentage of the original uncertainty in the data that is explained by the straight line model.

How do you know if a regression is valid?

Some of the methods used for determining regression validity include:
  1. Comparison of the model's calculations and results with theory.
  2. Comparison of the model's coefficients and predictions with theory.
  3. Gathering and incorporating new data to check model predictions.
  4. Cross-validation/data splitting.

What are the most important assumptions of regression?

Linearity: The relationship between X and the mean of Y is linear. Homoscedasticity: The variance of residual is the same for any value of X. Independence: Observations are independent of each other. Normality: For any fixed value of X, Y is normally distributed.

How do you measure the goodness of fit of a regression model?

A good way to look at regression results is to plot the predicted values against the real values in the holdout set. In a perfect fit, we expect the points to lie on the 45 degree line passing through the origin (y = x is the equation). The nearer the points to this line, the better the regression.

How do you interpret goodness of fit in regression?

In essence, the goodness-of-fit test compares the observed values to the expected (fitted or predicted) values. Most often the observed data represent the fit of the saturated model, the most complex model possible with the given data.

How do you assess goodness of fit in regression?

Dividing the difference between SST and SSE by SST gives R-squared. It is the proportional improvement in prediction from the regression model, compared to the mean model. It indicates the goodness of fit of the model. R-squared has the useful property that its scale is intuitive.
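In symbols, with \(SST\) the total sum of squares (mean model) and \(SSE\) the sum of squared errors of the regression:

\[R^2 = \frac{SST - SSE}{SST} = 1 - \frac{SSE}{SST}\]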

How do you assess a model fit?

Three statistics are used in Ordinary Least Squares (OLS) regression to evaluate model fit: R- squared, the overall F test, and the Root Mean Square Error (RMSE). All three are based on two sums of squares: Sum of Squares Total (SST) and Sum of Squares Error (SSE).

How do you plot a best fit curve in Excel?

Right-click on any one of the data points and a dialog box will appear. Click "Add Trendline"; this is what Excel calls a "best fit line". An options window appears, asking what type of Trend/Regression you want.

What does the best fit line mean?

A line of best fit is a straight line that minimizes the distance between it and some data. The line of best fit is used to express a relationship in a scatter plot of different data points. It is an output of regression analysis and can be used as a prediction tool for indicators and price movements.

What is the formula for curve fitting?

The highest-order polynomial that Trendline can use as a fitting function is a regular polynomial of order six, i.e., y = ax^6 + bx^5 + cx^4 + dx^3 + ex^2 + fx + g. It cannot fit fractional-order polynomials such as y = ax^2 + bx^(3/2) + cx + e.

Can Excel do a curve of best fit?

When we have a set of data and we want to determine the relationship between the variables through regression analysis, we can create a curve that best fits our data points. Fortunately, Excel allows us to fit a curve and come up with an equation that represents the best fit curve.

How do you find the line of best fit on Desmos?

In the input area, type y=a(x-h)^2 + k and press Enter. Sliders will be added for a, h, and k. Adjust the values of the sliders until the graph of the equation most closely fits your data points. You will likely need to change your slider settings.

Why is the regression line known as the line of best fit?

The regression line is sometimes called the "line of best fit" because it is the line that fits best when drawn through the points. It is a line that minimizes the distance of the actual scores from the predicted scores.

Can a best fit line be curved?

A line of best fit may be a straight line or a curve depending on how the points are arranged on the Scatter Graph.

What does an R value of 0.4 mean?

For this kind of data, we generally consider correlations above 0.4 to be relatively strong; correlations between 0.2 and 0.4 are moderate, and those below 0.2 are considered weak.
