# Lessons in Linear Regression - Analytics Edge in Python: Week 2

As I have said before, this course is in R, and trying to perform all the tasks in Python is proving to be an interesting challenge. I want to document the new and interesting things I am learning along the way. It turns out that the most generic way to run linear regression in R is the lm function. As part of its summary results, you get a lot of extra information, like confidence intervals and p-values for each of the coefficients, adjusted R2, the F statistic, etc., that you don't get in the output of sklearn, the most popular machine learning package in Python. Digging a bit deeper, you can see why:

• scikit-learn does machine learning, with an emphasis on predictive modeling, often with large and sparse data.
• statsmodels does "traditional" statistics and econometrics, with a much stronger emphasis on parameter estimation and (statistical) testing. Its linear models, generalized linear models and discrete models have been around for several years and are verified against Stata and R, and the output parameters are almost identical to what you would get in R.

For this project, where I am trying to translate R to Python, statsmodels is the better choice. Statsmodels can be used by importing statsmodels.api or statsmodels.formula.api, and I have played around with both. statsmodels.formula.api uses R-like formula syntax, while statsmodels.api has a very sklearn-like syntax (a minimal sketch of both interfaces follows the list of datasets below). I have explored linear regression on a few different kinds of datasets (github repo):

• Climate data
• Sales data for a company
• Detecting flu epidemics via search engine query
• Analyzing Test scores
• State data, including things like population, per capita income, murder rates, etc.
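
Here is a minimal sketch of the two statsmodels interfaces on a made-up dataframe; the column names (Temp, CO2, CH4) are placeholders, not the course data:

```python
# A minimal sketch comparing the two statsmodels interfaces (illustrative data only).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.RandomState(0)
df = pd.DataFrame({"CO2": 350 + 20 * rng.rand(50),
                   "CH4": 1700 + 50 * rng.rand(50)})
df["Temp"] = 14 + 0.01 * df["CO2"] + 0.001 * df["CH4"] + 0.1 * rng.randn(50)

# statsmodels.formula.api: R-like formula syntax, like lm(Temp ~ CO2 + CH4) in R
model_f = smf.ols("Temp ~ CO2 + CH4", data=df).fit()
print(model_f.summary())        # coefficients, p-values, R-squared, F-statistic, ...

# statsmodels.api: sklearn-like, pass y and X explicitly (add the intercept yourself)
X = sm.add_constant(df[["CO2", "CH4"]])
model_a = sm.OLS(df["Temp"], X).fit()
print(model_a.params)
```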

There are a lot of wonderful blogs/tutorials on linear regression. Some of my favorites are:

What is different about Analytics Edge is that there is a lot of emphasis on an intuitive understanding of the output of the regression algorithm.

So what is a linear regression model?

This question has confused me at times because:

• y = B0 + B1*X1 + B2*X2
• y = B0 + B1*ln(X1)
• y = B0 + B1*X1 + B2*X**2

are all linear models, but they are not all linear in X!

From what I understand now, a regression model is linear when it is linear in the parameters. That means each term of the model contains only one parameter, and each parameter is a multiplicative constant on its term. So in the second example above, ln(X1) is simply the variable, and the function is still linear in the parameters B0 and B1. Similarly, in the third example X**2 is just another variable, and the function is linear in its parameters.
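
To summarize the idea in one (hedged) formula of my own, not taken from the course: a model is linear if it can be written as

```latex
y = \beta_0 + \beta_1 f_1(X) + \beta_2 f_2(X) + \dots + \beta_k f_k(X)
```

where each f_i(X) is a fixed, possibly nonlinear, function of the predictors (such as ln(X1) or X**2), and the parameters (the B's above) appear only as multiplicative constants.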

Examples of non-linear models:

• y = B1*exp(-B2*X2) + K
• y = B1 + (B2 - B1)*exp(-B3*X) + K
• y = B1*X / (B2 + X) + K

A really important analysis to perform before you run any multivariate linear regression model is to calculate the correlations between the different variables. This is because if you include highly correlated variables as independent variables in your model, you can get some unintuitive and inaccurate answers! For example, if I run a multiple regression model to predict average global temperature using the atmospheric concentrations of most of the greenhouse gases available from the Climatic Research Unit at the University of East Anglia, and examine the output, looking at the coefficients and their corresponding p-values (the P>|t| column), we see that according to this model N2O and CH4, which are both considered to be greenhouse gases, are not significant variables! That is, the P>|t| values for those coefficients are higher than for the rest.

So what is P>|t|? It is the probability that the coefficient for that variable is zero, given the data used to build the model. The smaller this probability, the less likely it is that the coefficient is actually zero. We want independent variables with very small values for this. Note that if the coefficient of a variable is essentially zero, that variable is not helping to predict the dependent variable.
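
A small sketch of how to pull these quantities out of a fitted statsmodels result (synthetic data, made-up column names):

```python
# A hedged sketch: extracting the coefficients and the P>|t| column from a fitted model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.RandomState(0)
df = pd.DataFrame({"x1": rng.randn(100), "x2": rng.randn(100)})
df["y"] = 2.0 * df["x1"] + 0.01 * df["x2"] + rng.randn(100)

results = smf.ols("y ~ x1 + x2", data=df).fit()
print(results.params)    # fitted coefficients
print(results.pvalues)   # the P>|t| column: probability each true coefficient is zero
```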

Furthermore, the coefficient of N2O is negative, implying that global temperature is inversely related to the atmospheric concentration of N2O. This is contrary to what we know! So what is going on?

It becomes clear when we examine the correlations between the different variables: there is a really high correlation between N2O and CH4, corr(N2O, CH4) = 0.89.
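
Checking this in pandas is a one-liner; here is a minimal sketch with made-up numbers (in the notebook you would call .corr() on the climate dataframe):

```python
# Checking pairwise correlations before fitting a multiple regression (illustrative values).
import pandas as pd

df = pd.DataFrame({"CO2": [350.0, 354.2, 357.1, 361.6, 366.7],
                   "CH4": [1700.1, 1714.3, 1722.6, 1735.2, 1748.9],
                   "N2O": [303.7, 305.1, 306.2, 307.6, 309.1]})

print(df.corr())                  # full pairwise correlation matrix
print(df["N2O"].corr(df["CH4"]))  # a single pairwise correlation
```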

Ok, so what does that mean? Multicollinearity refers to the situation where two independent variables are highly correlated, and this can cause the coefficients to have non-intuitive signs. In practice, if N2O and CH4 are highly correlated, it may be sufficient to include only one of them in the model. We should also look at the other highly correlated variables.

Our aim is to find a linear model that uses the fewest independent variables to explain most of the variance.

How do you measure the quality of the regression model?

• Typically, when people say linear regression, they are using Ordinary Least Squares (OLS) to fit their model; this corresponds to minimizing the L2 norm of the errors. The goal is to find the line which minimizes the sum of the squared errors.
• If you are trying to minimize the sum of the absolute errors, you are using the L1 norm.
• The errors, or residuals, are the differences between the actual and predicted values. Root mean squared error (RMSE) is preferred over the Sum of Squared Errors (SSE) because SSE grows with the number of data points n: if you have double the number of data points, and thus maybe double the SSE, that does not mean the model is twice as bad!
• Furthermore, the units of SSE are hard to interpret; RMSE is normalized by n and has the same units as the dependent variable.
• R2, the coefficient of determination, is often used as a measure of how good the model is. R2 compares the model to a baseline model that does not depend on any variable and just predicts the mean of the dependent variable (i.e. a constant). So R2 captures the value added from using a regression model: R2 = 1 - (SSE/SST), where SST is the sum of squared errors of the baseline model.
• For a single-variable linear model, R2 = Corr(independent var, dependent var)**2.
• When you use multiple variables in your model, you can calculate something called adjusted R2: it accounts for the number of independent variables used relative to the number of data points. A model with just one variable also gives an R2, but the R2 for a combined linear model is not equal to the sum of the R2's of the individual single-variable models, i.e. R2 is not additive.
• R2 will never decrease as you add more independent variables, but adjusted R2 will decrease if you add an independent variable that does not help the model (for example if the newly added variable introduces collinearity issues!). This is a good way to decide whether an additional variable should even be included. However, adjusted R2, which penalizes model complexity to control for overfitting, generally under-penalizes complexity. The best approach to feature selection is actually cross-validation.
• Cross-validation provides a more reliable estimate of out-of-sample error, and thus is a better way to choose which of your models will best generalize to out-of-sample data. There is extensive functionality for cross-validation in scikit-learn, including automated methods for searching over different sets of parameters and different models. Importantly, cross-validation can be applied to any model, whereas the methods described above (i.e. using R2) only apply to linear models. A short sketch of these metrics is shown after this list.
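
Here is a hedged sketch of these quantities on synthetic data (the variable names are mine, not the course's):

```python
# RMSE, R2, adjusted R2 and cross-validated R2 on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.randn(100)

model = LinearRegression().fit(X, y)
pred = model.predict(X)

sse = np.sum((y - pred) ** 2)            # sum of squared errors of the model
sst = np.sum((y - y.mean()) ** 2)        # SSE of the baseline (mean-only) model
rmse = np.sqrt(sse / len(y))             # same units as y, comparable across sample sizes
r2 = 1 - sse / sst                       # value added over the baseline model
n, p = X.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)   # penalizes extra predictors

# cross-validation: a better estimate of out-of-sample performance
cv_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(rmse, r2, adj_r2, cv_r2.mean())
```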

For skewed distributions it is often useful to predict the logarithm of the dependent variable instead of the dependent variable itself. This prevents the small number of unusually large or small observations from having an undue effect on the sum of squared errors of the predictive model. For example, in the Detecting Flu Epidemics via Search Engine Query Data python notebook, a histogram of the percentage of Influenza-Like Illness (ILI) related physician visits suggests the distribution is right skewed, i.e. most of the ILI values are small, with a relatively small number of large values. However, a plot of the natural log of ILI vs Queries shows that there probably is a positive linear relationship between ln(ILI) and Queries.
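
A minimal sketch of the transformation (synthetic data; the column names ILI and Queries mirror the flu notebook):

```python
# Log-transform a right-skewed dependent variable before regressing (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.RandomState(0)
flu = pd.DataFrame({"Queries": rng.uniform(0, 1, 200)})
flu["ILI"] = np.exp(0.5 + 2.0 * flu["Queries"] + 0.3 * rng.randn(200))  # right-skewed

flu["logILI"] = np.log(flu["ILI"])                    # much more symmetric
model = smf.ols("logILI ~ Queries", data=flu).fit()
print(model.rsquared)
```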

Time Series Model:

In the Google Flu Trends problem set, we initially attempt to model log(ILI) as a function of Queries. But since the observations in this dataset are consecutive weekly measurements of the dependent and independent variables, it can be treated as a time series. Statistical models can often be improved by predicting the current value of the dependent variable using past values. Since the ILI data is reported with a 1-2 week lag, we will use data from 2 weeks ago and earlier; how many values are then missing from this lagged variable? Calculating the lag is shown in the sketch below. Plotting log(ILI_lag2) vs log(ILI), we notice a strong linear relationship. We can thus use both Queries and log(ILI_lag2) to predict log(ILI), and this significantly improves the model, increasing R2 from 0.70 to 0.90; the RMSE on the test set is also reduced significantly.
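
A hedged sketch of building the lag variable with pandas .shift() (synthetic data, column names as in the flu notebook):

```python
# Build a two-week lag of ILI and use it alongside Queries (illustrative data only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.RandomState(0)
flu = pd.DataFrame({"Queries": rng.uniform(0, 1, 200)})
flu["ILI"] = np.exp(0.5 + 2.0 * flu["Queries"] + 0.3 * rng.randn(200))

# value of ILI two rows (= two weeks) earlier; the first two rows have no lag value
flu["ILI_lag2"] = flu["ILI"].shift(2)
flu = flu.dropna()

model = smf.ols("np.log(ILI) ~ Queries + np.log(ILI_lag2)", data=flu).fit()
print(model.rsquared)
```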

Running regression models using only numerical variables is interesting, but one can use categorical variables in the model as well!

Encoding categorical variables: To include categorical features in a linear regression model, we need to convert the categories into numeric dummy variables. In python, this functionality is available in DictVectorizer from scikit-learn, or in pandas' get_dummies() function. OneHotEncoder is another option. The difference is as follows:

1. OneHotEncoder takes as input categorical values encoded as integers, which you can get from LabelEncoder.
2. DictVectorizer expects data as a list of dictionaries, where each dictionary is a data row with column names as keys.

One can also use Patsy (another python library) or get_dummies(); see the Reading Test Scores notebook. If you are using statsmodels, you can also use the C() function (see Forecasting Elantra Sales).

Pandas' get_dummies is, I think, the easiest option. For example, in the Reading Test Scores assignment, we wish to use the categorical variable 'Race'. Using get_dummies we could create one dummy column per level, but we need only k-1 = 5 dummy variables for the Race variable. The reason is that if we were to make k dummies for a variable, we would have collinearity. One can think of the k-1 dummies as contrasts between the effects of their corresponding levels and the level whose dummy is left out. Usually the most frequent category is used as the reference level.
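
A minimal sketch with get_dummies; the Race levels below are placeholders, not the exact categories in the assignment data:

```python
# k-1 dummy encoding of a categorical column with pandas get_dummies (illustrative data).
import pandas as pd

scores = pd.DataFrame({"Race": ["White", "Black", "Hispanic", "Asian", "White", "Other"],
                       "Score": [520, 480, 505, 560, 540, 500]})

# drop_first=True keeps k-1 dummies; the dropped (first) level becomes the reference
dummies = pd.get_dummies(scores["Race"], prefix="Race", drop_first=True)
scores = pd.concat([scores.drop("Race", axis=1), dummies], axis=1)
print(scores.head())
```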

Other great Resources:

# edX’s Analytics Edge in Python: exploring the power of pandas ‘groupby’ and value_counts!

There is an amazing course for beginners in Data Science on edX by MIT: The Analytics Edge. The material is great and the assignments are plentiful, which makes for great practice. My only problem is that it is in R, and I have decided to focus more on python. I decided it would be an interesting exercise to try and complete all the assignments in python, and boy, it has been so worthwhile! I've had to hunt around for R-equivalent code and syntax, and realized that there are some things that are so simple in R but convoluted in python!

I will be posting all my python notebooks on github (see github repo: Analytiq Edge in Python), along with the associated data files and the assignment questions. This blog post deals with the data analysis assignments posted in Week 1.

I had not understood the power of the 'value_counts' and 'groupby' commands in pandas. They are really useful and powerful. For example, in the Analytical Detective notebook, where we are analyzing Chicago street crime data from 2001-2012 and need to figure out which month had the most arrests, one can create a 'Month' column using a lambda function and then plot the value_counts (a sketch follows below). To find the trends over the 12 years, it is useful to create a boxplot of the variable "Date" sorted by the variable "Arrest", which shows that the number of arrests made in the first half of the time period is significantly higher, even though the total number of crimes is similar over the first and second halves of the time period. Another way to check whether that makes sense is to plot the number of arrests by year.
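
A hedged sketch of the idea (a tiny invented dataframe stands in for the Chicago crime data, and assumes Date has already been parsed to datetime):

```python
# Extract the month with a lambda, then rank months with value_counts (illustrative data).
import pandas as pd

crimes = pd.DataFrame({"Date": pd.to_datetime(["2005-03-14", "2005-03-20", "2007-07-04",
                                               "2010-12-25", "2011-03-01"]),
                       "Arrest": [True, False, True, False, True]})

crimes["Month"] = crimes["Date"].apply(lambda d: d.month)   # month extracted via a lambda
print(crimes["Month"].value_counts())                       # months ranked by crime count

# months ranked by number of arrests: restrict to rows where an arrest was made
print(crimes[crimes["Arrest"]]["Month"].value_counts())
```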

Groupby with a MultiIndex: With hierarchically indexed data, one can group by one of the levels of the hierarchy, which can be very useful. For example, to answer the question "On which day of the week do the most motor vehicle thefts at gas stations happen?", we can first define a new dataframe of counts and then group by level 0 and sum. Note that we are not asking when the most arrests happen, but when the most thefts happen, so we need the sum over both arrests and non-arrests (a sketch follows below). Pretty cool, huh?
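
A hedged reconstruction of that idea with made-up counts:

```python
# Group a hierarchically indexed Series by its outer level and sum over the inner one.
import pandas as pd

# counts of gas-station thefts indexed by (day of week, whether an arrest was made)
idx = pd.MultiIndex.from_tuples([("Friday", True), ("Friday", False),
                                 ("Saturday", True), ("Saturday", False),
                                 ("Sunday", True), ("Sunday", False)],
                                names=["Weekday", "Arrest"])
thefts = pd.Series([20, 320, 15, 280, 10, 190], index=idx, name="Count")

# total thefts per weekday = sum over both arrests and non-arrests
print(thefts.groupby(level=0).sum().sort_values(ascending=False))
```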

The assignment using the 'Demographics and Employment in the US' dataset also makes some neat use of groupby. For example, to find how many states had all interviewees living in a non-metropolitan area (i.e. they have a missing MetroAreaCode value), one can group by state and check whether every MetroAreaCode value is missing; the same approach gives just the list of such states, how many states had all interviewees living in a metropolitan area, and which region of the US had the largest proportion of interviewees living in a non-metropolitan area. One can even compute proportions directly (a sketch follows below). The dataset with stock prices (see Stock_dynamics.ipynb) is really useful to play around with different plotting routines: visualizing the stock prices of the five companies over a 10-year span and seeing what happened after the Oct 1997 crash, or using groupby to plot monthly trends. Pretty cool, I think!
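
A hedged sketch of the groupby pattern (a tiny invented frame stands in for the survey data):

```python
# States where every interviewee has a missing MetroAreaCode, and proportions by region.
import numpy as np
import pandas as pd

cps = pd.DataFrame({"State": ["Alaska", "Alaska", "Wyoming", "Ohio", "Ohio"],
                    "MetroAreaCode": [np.nan, np.nan, np.nan, 17460.0, np.nan],
                    "Region": ["West", "West", "West", "Midwest", "Midwest"]})

# True for states where every interviewee has a missing MetroAreaCode
all_missing = cps.groupby("State")["MetroAreaCode"].apply(lambda s: s.isnull().all())
print(all_missing.sum())                      # how many such states
print(list(all_missing[all_missing].index))   # just the list of states

# proportion of interviewees living outside a metropolitan area, by region
print(cps.groupby("Region")["MetroAreaCode"].apply(lambda s: s.isnull().mean()))
```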

Take a look at the datasets, and have fun!

# Moving to AWS…

As the project grew, we started downloading tweets from various journal websites and tried to set up an algorithm to parse tweets that were related to particular papers and link them to the paper, thereby producing another "metric" to compare papers by. In addition, we started keeping detailed records of all the authors of the papers and attempted to create a citation database. As complexity grew, we found that our local server was too slow, and after some research we decided to take the plunge and move our stuff to AWS (Amazon Web Services). We created a VPC (Virtual Private Cloud), moved our database to Amazon's RDS (Relational Database Service), and created buckets, i.e. storage, on Amazon's S3 (Simple Storage Service). It is relatively easy to do and the AWS documentation is pretty good. What I found really helpful were the masterclass webinar series.

I launched a Linux-based instance and then installed all the software versions I needed, like python 2.7, pandas, numpy, ipython (pylab), matplotlib, scipy, etc. It was interesting to note that on many of the Amazon machines, the default python version was 2.6, not 2.7. I scoured the web a fair bit to help me configure my instance and am sharing some of the commands below.

General commands to install python 2.7 on AWS; these should work on most instances running Ubuntu/RedHat Linux:

Start python to check the installation's unicode build. If you have to deal with a fair amount of unicode data, like I do, then make sure you have the "wide build". I learned this the hard way:

• >>> import sys
• >>> print sys.maxunicode

It should NOT be 65535 (that indicates a narrow build); a wide build reports 1114111.

• # download and install the AWS CLI bundle
• wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
• unzip awscli-bundle.zip
• sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
• # install build tools
• sudo yum install make automake gcc gcc-c++ kernel-devel git-core -y
• # install python 2.7 and change the default python symlink
• sudo yum install python27-devel -y
• sudo rm /usr/bin/python
• sudo ln -s /usr/bin/python2.7 /usr/bin/python
• # yum still needs python 2.6, so back yum up and point it at python2.6
• sudo cp /usr/bin/yum /usr/bin/_yum_before_27
• sudo sed -i s/python/python2.6/g /usr/bin/yum
• # starting python should now display 2.7.5 or later
• python
• sudo yum install httpd
• # now install pip for 2.7
• sudo curl -o /tmp/ez_setup.py https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
• sudo /usr/bin/python27 /tmp/ez_setup.py
• sudo /usr/bin/easy_install-2.7 pip
• sudo pip install virtualenv
• # on Ubuntu instances, git can be installed via apt-get
• sudo apt-get update
• sudo apt-get install git
• # should display the current versions
• pip -V && virtualenv --version
• Installing all the python library modules:
• sudo pip install ipython
• sudo yum install numpy scipy python-matplotlib ipython python-pandas sympy python-nose
• sudo yum install xorg-x11-xauth.x86_64 xorg-x11-server-utils.x86_64
• sudo pip install pyzmq tornado jinja2
• sudo yum groupinstall "Development Tools"
• sudo yum install python-devel
• sudo pip install matplotlib
• sudo pip install networkx
• sudo pip install cython
• sudo pip install boto
• sudo pip install pandas
• Some modules could not be installed using pip, so use the following instead:
• sudo apt-get install python-mpi4py python-h5py python-tables python-pandas python-sklearn python-scikits.statsmodels
• Note that to install h5py or pytables you must install the following dependencies first:
• numpy
• numexpr
• Cython
• dateutil
• HDF5
• HDF5 can be installed using wget:
• wget http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.9.tar.gz
• tar xvfz hdf5-1.8.9.tar.gz; cd hdf5-1.8.9
• ./configure --prefix=/usr/local
• make; make install
• Pytables
• pip install git+https://github.com/PyTables/PyTables.git@v.3.1.1#egg=tables

** To install h5py, make sure hdf5 is on the path.

I really liked the concept of AMIs: you create a machine that has the configuration you want, then create an "image" and give it a name. You have then created a virtual machine that you can launch anytime you want, and as many copies of it as you want! What is also great is that when you create the image, all the files in your directories at that time are also part of that virtual machine, so it is almost like having a snapshot of your environment. So create images often (you can delete old images, of course!) and you can launch your environment at any time. Of course, you can and should always upload new files to S3, as that is your storage repository.

Another neat trick: if you install the EC2 (Elastic Compute Cloud) CLI (Command Line Interface) on your local machine and set up your IAM credentials, you don't need the .pem file! Even if you are accessing an AMI in a private cloud, make sure you click on "get public IP" when you launch your instance, and log in with ssh -Y ubuntu@ec2-xx.yyy.us-west1.compute.amazonaws.com. You can then enjoy xterm and the graphical display from matplotlib as if you were running everything on your local terminal. Very cool indeed!

# Learning Python, DataScience – some great resources.

There are some AMAZING resources online to learn python and Data Science in general. Here is an incomplete list of sites/pages I like to go to. I have started using Pandas/SciPy a lot and love it. I plan to keep adding to this page.

Python:

Data Science

Using R