{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Your first neural network\n",
"\n",
"In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"%config InlineBackend.figure_format = 'retina'\n",
"\n",
"import numpy as np\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load and prepare the data\n",
"\n",
"A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n",
"\n",
"rides = pd.read_csv(data_path)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>instant</th>\n",
" <th>dteday</th>\n",
" <th>season</th>\n",
" <th>yr</th>\n",
" <th>mnth</th>\n",
" <th>hr</th>\n",
" <th>holiday</th>\n",
" <th>weekday</th>\n",
" <th>workingday</th>\n",
" <th>weathersit</th>\n",
" <th>temp</th>\n",
" <th>atemp</th>\n",
" <th>hum</th>\n",
" <th>windspeed</th>\n",
" <th>casual</th>\n",
" <th>registered</th>\n",
" <th>cnt</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>1</td>\n",
" <td>2011-01-01</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>0.24</td>\n",
" <td>0.2879</td>\n",
" <td>0.81</td>\n",
" <td>0.0</td>\n",
" <td>3</td>\n",
" <td>13</td>\n",
" <td>16</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>2</td>\n",
" <td>2011-01-01</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>0.22</td>\n",
" <td>0.2727</td>\n",
" <td>0.80</td>\n",
" <td>0.0</td>\n",
" <td>8</td>\n",
" <td>32</td>\n",
" <td>40</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>3</td>\n",
" <td>2011-01-01</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>0.22</td>\n",
" <td>0.2727</td>\n",
" <td>0.80</td>\n",
" <td>0.0</td>\n",
" <td>5</td>\n",
" <td>27</td>\n",
" <td>32</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>4</td>\n",
" <td>2011-01-01</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>3</td>\n",
" <td>0</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>0.24</td>\n",
" <td>0.2879</td>\n",
" <td>0.75</td>\n",
" <td>0.0</td>\n",
" <td>3</td>\n",
" <td>10</td>\n",
" <td>13</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>5</td>\n",
" <td>2011-01-01</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>4</td>\n",
" <td>0</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>0.24</td>\n",
" <td>0.2879</td>\n",
" <td>0.75</td>\n",
" <td>0.0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>1</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" instant dteday season yr mnth hr holiday weekday workingday \\\n",
"0 1 2011-01-01 1 0 1 0 0 6 0 \n",
"1 2 2011-01-01 1 0 1 1 0 6 0 \n",
"2 3 2011-01-01 1 0 1 2 0 6 0 \n",
"3 4 2011-01-01 1 0 1 3 0 6 0 \n",
"4 5 2011-01-01 1 0 1 4 0 6 0 \n",
"\n",
" weathersit temp atemp hum windspeed casual registered cnt \n",
"0 1 0.24 0.2879 0.81 0.0 3 13 16 \n",
"1 1 0.22 0.2727 0.80 0.0 8 32 40 \n",
"2 1 0.22 0.2727 0.80 0.0 5 27 32 \n",
"3 1 0.24 0.2879 0.75 0.0 3 10 13 \n",
"4 1 0.24 0.2879 0.75 0.0 0 1 1 "
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rides.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Checking out the data\n",
"\n",
"This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.\n",
"\n",
"Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.axes._subplots.AxesSubplot at 0x7fa490564c18>"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAABBoAAALzCAYAAAC/R2QvAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAewgAAHsIBbtB1PgAAIABJREFUeJzs3Xu0ZHV55//P95zTzaGhuQittN1KUEFhElcQlBjNj4iY\n6BgZNTomJBnFqMREXIyuhESjXMyKK4NrVMJEFC/gTBxiBk1iJJhBRMcLChIXysUIotJtg402Td+b\nPvX9/bFrd33P7r2rdlXty/Pd9X6tddap06dOnV3Vdfbls5/n2c57LwAAAAAAgCrMtb0AAAAAAACg\nOwgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQga\nAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAA\nAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABAZQgaAAAAAABA\nZcwHDc65U5xz73DOfc45d79zbrdzbptz7rvOuY86555T4jFe7Zzrlfz4LyUe72Dn3J84577hnPup\nc267c+4u59x7nHNPrOaZAwAAAAAQn4W2F2AY59yXJD23/6UPvrVC0lMkHS/pNc65/ynpdd77R0c8\npB/x/TLL9BRJ1/V/f/h4J0h6qqTXOed+x3v/2Wl/FwAAAAAAsTEdNEhaq+Rg/seS/l7S/5P0I0nz\nkp4t6a2S1kn6vf6//W6Jx/w1SZuGfH9D0Tecc4dK+qwGIcOHJP2dpF2SnifpzyQdJuka59xzvPe3\nl1geAAAAAAA6w3k/9Un+2jjn/knS1ZI+5XMW1Dn3GElfVVJN4CWd7r3/cs79Xi3pY/37HOe9/9GE\ny3OJpD/vP84fe+//e+b7vyTpS0pCjy9678+Y5PcAAAAAABAr0zMavPdnee+vzQsZ+t//mZKqhtQr\n6loW59yCpPOUhAx3ZUOG/vLcLOkjkpyk051zp9S1PAAAAAAAWGQ6aCjppuD2k2v8Pc+TdHj/9tVD\n7ndVcPtltS0NAAAAAAAGdSFoWBncXqrx9zw3uP3FIfe7VdKO/u2RV8QAAAAAAKBLuhA0/Gpw+64S\n97/KObfRObfHObfZOfc159y7nHOPH/FzJwW37y66k/d+SdK9StonTiyxPAAAAAAAdEbUQYNzzkm6\nIPinT5b4sdMlHaPkihuPkfQsSW+XdI9z7g1Dfm59//MO7/0jI37H/f3Pa5xzK0osEwAAAAAAnWD9\n8pajvEVJUOAlXeu9/7ch971X0rWSbtYgCHiSpN9UMkRyUdIHnHM97/2Hc35+df/z9hLLtSO4faik\nLSV+BgAAAACA6Jm+vOUwzrnTJf1fJWHJA5Ke7r1/qOC+q73324Y81n+U9On+Y+2U9GTv/U8y97lH\nSTDxI+/9z41Ytqsl/Z6SAOQJ3vsfl31eAAAAAADELMrWCefcf5D0KSXBwC5JrywKGSRpWMjQ//51\nki5RMldhlaTfz7nb7v7nlTnfyzoouL2rxP0BAAAAAOiE6FonnHPHSfqcpCMl7ZP0Ku/9Vyp46A8p\nCRukZI7DuzPfT8OKQ0s81iHB7TKtFvs553YoCSq8pJ+V+JElSb1xfgcAAAAAYCbMSZovcb/HKDnx\nvsd7f8ioO48SVdDQvzLEDZIer+Tg+hzv/T9X8dje+83OuZ9KOkrSupy7bJB0mqRDnHOHjRgI+YT+\n583e+0fHXJSDNHgjPHbMnwUAAAAAYFIHjb7LaNEEDc65o5TMZDhOydn+N3nv/7biXzNsYMWdSgZH\nStLTJH0j707OuXlJT+4/VpnLbRYuw5o1a0beeX5+XvPzZQIqIB579+7V5s2btWbNGq1cWaZbCegm\n/hYA/g6AFH8LmMTS0pKWlpZG3u+hhx5Sf35jJdXyUQQNzrnDJP2rpBOVHIhf4L2/ouLfcbSko/tf\n5g1v/HJw+3QVBA2STlXSOuElTdLS8TNJj12zZo1+8pOfjLwz0EW33XabTjnlFF1//fV6xjOe0fbi\nAK3hbwHg7wBI8beAOq1fv14bN26UpEoOQs0Pg3TOHSzpOkknKzl4/wvv/Xtq+FXnKulJkaQv5nz/\nJklb+7dfPeRxzgluf3r6xQIAAAAAIB6mgwbn3ApJ/yDpl5WEDO/z3l845mMc65z7xRH3+Q1J7+h/\nuVvSx7L36c9auExJGHGic+6tOY/zbEmv7S/rTd77b46zrAAAAAAAxM5668Q1kl6g5MD9Rkkf7V/a\nsshe7/33Mv/2c5K+4Jz7mqTPSPqWknIQJ+lJkl6pZPaC6/+et3rvNxU8/qWSXiXpBEmXOueO7y/j\nLklnSPozJa/pTknnj/VMAQAAAADoAOtBw8v6n52k50v69oj7/0BJeJDlJf2SpGcX/JyXtEPS+d77\njxQ9uPd+u3PuxZI+K+l4SW/of4SPs1XS2d77UcsKAAAAAEDnWA8ahl0Fouz9vynpd5WEDKdKWqtk\n6OOCpC2S7pD0eUkf9t4/NPIXeH+vc+5kSX+kpBriKZJWSrpfSQBxmff+/jGXGwAAAACATjAdNHjv\np75uo/d+u6T/3f+ohPd+l6T39D8AAAAAAECf6WGQAAAAAAAgLgQNAAAAAACgMgQN9ixJ0vz81F0j\nQLTWrl2rCy+8UGvXrm17UYBW8bcA8HcApPhbQEyc9+POW0SdnHMbJK1bt26dNmzY0PbiAAAAAAA6\nbv369dq4caMkbfTer5/28ahoAAAAAAAAlSFoAAAAAAAAlSFoAAAAAAAAlVloewEAAAAAwJJTTz1V\nDzzwQNuLAYzlmGOO0a233tr2YkgiaAAAAACAZR544IF0MB6ACRA0AAAAAECOubk5LicJ8zZt2qRe\nr9f2YixD0AAAAAAAOdauXcsl52FecGlKMxgGCQAAAAAAKkPQAAAAAAAAKkPQAAAAAAAAKkPQAAAA\nAAAAKkPQAAAAAAAAKkPQAAAAAAAAKkPQAAAAAAAAKkPQAAAAAAAAKkPQAAAAAAAAKkPQAAAAAAAA\nKkPQAAAAAAAAKkPQAAAAAAAAKkPQAAAAAABAja6++mrNzc1pbm5Or33ta9tenNoRNAAAAAAA0ADn\nXNuL0AiCBgAAAAAAUBmCBgAAAAAAUBmCBgAAAAAAauS9b3sRGkXQAAAAAADonG3btunyyy/XWWed\npeOOO06rV6/W4uKi1q1bpzPPPFOXXHKJ7rzzzgN+7pxzztk/uPHjH/+4JGnnzp36m7/5G/3Kr/yK\njjnmGC0uLuqJT3yizj77bH31q18tXIbXvOY1ywZAeu911VVX7X/88OOMM86o54VowULbCwAAAAAA\nQJWuuOIKvf3tb9eWLVskLR/C+MADD2jTpk268cYbddFFF+n666/Xr/3arx3wGOnP3H333Xr5y1+u\nu+++e9njbNiwQddcc42uueYaXXjhhbr
wwgtzHyP9mbSqYRYGQhI0AAAAAAA6481vfrMuv/zy/Qf5\n8/PzeuYzn6njjz9ei4uL2rx5s771rW/pBz/4gSRp9+7dhY+1ceNGnXnmmdq0aZOOPPLI/RUNDz30\nkG688UZt3bpVknTJJZfopJNO0itf+cplP/+CF7xAq1ev1t13360bbrhBzjk97WlP0/Of//wDftfx\nxx9f3YvQMoIGAAAAAEAnXHHFFftDBkl61atepUsvvVTr1q074L533nmnrrzySq1atarw8S655BLt\n3btXF1xwgd75zndqcXFx//cefvhhveIVr9CNN94o55ze9ra3HRA0nH322Tr77LN19dVX64YbbpAk\nnXbaabrsssuqeLpmMaMBAAAAABC9hx9+WBdccMH+kOGNb3yjPvGJT+SGDJJ00kkn6b3vfa/OPPPM\n3O9777V371697W1v01/+5V8uCxkk6YgjjtAnPvEJHXLIIfLe6/vf/75uueWWap9UpAgaAAAAAADR\n+9CHPqRt27bJe69jjz1
"text/plain": [
"<matplotlib.figure.Figure at 0x7fa490163a90>"
]
},
"metadata": {
"image/png": {
"height": 377,
"width": 525
}
},
"output_type": "display_data"
}
],
"source": [
"rides[:24*10].plot(x='dteday', y='cnt')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Dummy variables\n",
"Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>yr</th>\n",
" <th>holiday</th>\n",
" <th>temp</th>\n",
" <th>hum</th>\n",
" <th>windspeed</th>\n",
" <th>casual</th>\n",
" <th>registered</th>\n",
" <th>cnt</th>\n",
" <th>season_1</th>\n",
" <th>season_2</th>\n",
" <th>...</th>\n",
" <th>hr_21</th>\n",
" <th>hr_22</th>\n",
" <th>hr_23</th>\n",
" <th>weekday_0</th>\n",
" <th>weekday_1</th>\n",
" <th>weekday_2</th>\n",
" <th>weekday_3</th>\n",
" <th>weekday_4</th>\n",
" <th>weekday_5</th>\n",
" <th>weekday_6</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0.24</td>\n",
" <td>0.81</td>\n",
" <td>0.0</td>\n",
" <td>3</td>\n",
" <td>13</td>\n",
" <td>16</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>...</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0.22</td>\n",
" <td>0.80</td>\n",
" <td>0.0</td>\n",
" <td>8</td>\n",
" <td>32</td>\n",
" <td>40</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>...</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0.22</td>\n",
" <td>0.80</td>\n",
" <td>0.0</td>\n",
" <td>5</td>\n",
" <td>27</td>\n",
" <td>32</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>...</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0.24</td>\n",
" <td>0.75</td>\n",
" <td>0.0</td>\n",
" <td>3</td>\n",
" <td>10</td>\n",
" <td>13</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>...</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0.24</td>\n",
" <td>0.75</td>\n",
" <td>0.0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>1</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>...</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>5 rows × 59 columns</p>\n",
"</div>"
],
"text/plain": [
" yr holiday temp hum windspeed casual registered cnt season_1 \\\n",
"0 0 0 0.24 0.81 0.0 3 13 16 1 \n",
"1 0 0 0.22 0.80 0.0 8 32 40 1 \n",
"2 0 0 0.22 0.80 0.0 5 27 32 1 \n",
"3 0 0 0.24 0.75 0.0 3 10 13 1 \n",
"4 0 0 0.24 0.75 0.0 0 1 1 1 \n",
"\n",
" season_2 ... hr_21 hr_22 hr_23 weekday_0 weekday_1 weekday_2 \\\n",
"0 0 ... 0 0 0 0 0 0 \n",
"1 0 ... 0 0 0 0 0 0 \n",
"2 0 ... 0 0 0 0 0 0 \n",
"3 0 ... 0 0 0 0 0 0 \n",
"4 0 ... 0 0 0 0 0 0 \n",
"\n",
" weekday_3 weekday_4 weekday_5 weekday_6 \n",
"0 0 0 0 1 \n",
"1 0 0 0 1 \n",
"2 0 0 0 1 \n",
"3 0 0 0 1 \n",
"4 0 0 0 1 \n",
"\n",
"[5 rows x 59 columns]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\n",
"for each in dummy_fields:\n",
" dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n",
" rides = pd.concat([rides, dummies], axis=1)\n",
"\n",
"fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n",
" 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\n",
"data = rides.drop(fields_to_drop, axis=1)\n",
"data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Scaling target variables\n",
"To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\n",
"\n",
"The scaling factors are saved so we can go backwards when we use the network for predictions."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n",
"# Store scalings in a dictionary so we can convert back later\n",
"scaled_features = {}\n",
"for each in quant_features:\n",
" mean, std = data[each].mean(), data[each].std()\n",
" scaled_features[each] = [mean, std]\n",
" data.loc[:, each] = (data[each] - mean)/std"
]
},
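{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (an added illustration, not part of the original assignment), we can use the saved factors to go backwards: unscaling a standardized feature should recover its original units. The prediction cell near the end of the notebook uses exactly this `values*std + mean` pattern."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Undo the standardization for one feature using the saved mean and std.\n",
"mean, std = scaled_features['cnt']\n",
"unscaled_cnt = data['cnt']*std + mean\n",
"unscaled_cnt.head()"
]
},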
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Splitting the data into training, testing, and validation sets\n",
"\n",
"We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Save the last 21 days \n",
"test_data = data[-21*24:]\n",
"data = data[:-21*24]\n",
"\n",
"# Separate the data into features and targets\n",
"target_fields = ['cnt', 'casual', 'registered']\n",
"features, targets = data.drop(target_fields, axis=1), data[target_fields]\n",
"test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set)."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hold out the last 60 days of the remaining data as a validation set\n",
"train_features, train_targets = features[:-60*24], targets[:-60*24]\n",
"val_features, val_targets = features[-60*24:], targets[-60*24:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Time to build the network\n",
"\n",
"Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n",
"\n",
"The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.\n",
"\n",
"We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.\n",
"\n",
"> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n",
"\n",
"Below, you have these tasks:\n",
"1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.\n",
"2. Implement the forward pass in the `train` method.\n",
"3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.\n",
"4. Implement the forward pass in the `run` method.\n",
" "
]
},
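{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before implementing the class, here is a small standalone check (an added illustration, not part of the graded tasks): the sigmoid's derivative can be written in terms of its own output, $\\sigma'(x) = \\sigma(x)\\,(1 - \\sigma(x))$, and we can verify that form against a numerical derivative. The class below uses this same output-based form."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Standalone sigmoid and a finite-difference check of its derivative.\n",
"def sigmoid(x):\n",
"    return 1 / (1 + np.exp(-x))\n",
"\n",
"x, h = 0.5, 1e-6\n",
"analytic = sigmoid(x) * (1 - sigmoid(x))             # derivative via the output\n",
"numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2*h)  # central difference\n",
"np.isclose(analytic, numeric)"
]
},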
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Shape of weight matrices\n",
"\n",
" Node j1 -> | w1|w2|w3|....wn\n",
" ---------------|--|-------\n",
" Node j2 -> | w1|w2|w3|....wn\n",
" ---------------|--|-------\n",
" Node j3 -> | w1|w2|w3|....wn\n",
" . ...\n",
" . ...\n",
" Node jm ...\n",
" \n"
]
},
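{
"cell_type": "markdown",
"metadata": {},
"source": [
"A toy version of that layout (an added illustration with made-up sizes: 3 input features, 2 hidden nodes, 1 output node) shows the shapes used by the initialization code below: each weight matrix is (nodes in the layer) x (nodes feeding into it)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Toy weight matrices with the row/column layout sketched above.\n",
"toy_w_i_h = np.random.normal(0.0, 2**-0.5, (2, 3))  # hidden_nodes x input_nodes\n",
"toy_w_h_o = np.random.normal(0.0, 1**-0.5, (1, 2))  # output_nodes x hidden_nodes\n",
"toy_w_i_h.shape, toy_w_h_o.shape"
]
},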
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class NeuralNetwork(object):\n",
" def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n",
" # Set number of nodes in input, hidden and output layers.\n",
" self.input_nodes = input_nodes\n",
" self.hidden_nodes = hidden_nodes\n",
" self.output_nodes = output_nodes\n",
" \n",
" #print(\"DEBUG: hidden units \",self.hidden_nodes)\n",
" \n",
"\n",
" # Initialize weights\n",
" self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n",
" (self.hidden_nodes, self.input_nodes)) # shape hidden_n x n_features\n",
"\n",
" self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n",
7 years ago
" (self.output_nodes, self.hidden_nodes)) # shape 1 x hidden_n\n",
7 years ago
" self.lr = learning_rate\n",
" \n",
" #### Set this to your implemented sigmoid function ####\n",
" # Activation function is the sigmoid function\n",
" self.activation_function = self._sigmoid\n",
" self.hidden_gradient_function = self._gradient_sigmoid\n",
" self.output_activation = lambda x: x\n",
" \n",
" def _sigmoid(self, x):\n",
" return 1 / (1 + np.exp(-x))\n",
" \n",
" def _gradient_sigmoid(self, output):\n",
" return output * (1-output)\n",
" \n",
" \n",
" def train(self, inputs_list, targets_list):\n",
" # Convert inputs list to 2d array\n",
" inputs = np.array(inputs_list, ndmin=2).T\n",
" targets = np.array(targets_list, ndmin=2).T\n",
" \n",
" #### Implement the forward pass here ####\n",
" ### Forward pass ###\n",
" # TODO: Hidden layer\n",
" \n",
" # signal into hidden layer\n",
" hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # shape: hidden_n x n . nx1 -> hidden_n x 1\n",
" # signals from hidden layer\n",
" hidden_outputs = self.activation_function(hidden_inputs) # shape: hidden_n x 1\n",
" \n",
" # TODO: Output layer\n",
" \n",
" # signals into final output layer\n",
" final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # 1 x hidden_n . hidden_n x 1 -> 1x1\n",
" # signals from final output layer - same, h(f(x)) = f(x)\n",
" final_outputs = self.output_activation(final_inputs) # shape 1x1\n",
" \n",
" #### Implement the backward pass here ####\n",
" ### Backward pass ###\n",
" \n",
" # TODO: Output error\n",
" \n",
" # Output layer error is the difference between desired target and actual output.\n",
" output_errors = targets - final_outputs\n",
" \n",
" # since the gradient of f(x) = x is 1, the error and output error are the same \n",
" \n",
" # TODO: Backpropagated error\n",
" \n",
" # errors propagated to the hidden layer\n",
7 years ago
" # shape: 1xhidden_n . 1x1\n",
" hidden_errors = np.multiply(self.weights_hidden_to_output, output_errors) # shape 1 x hidden_n\n",
7 years ago
" \n",
" # hidden layer gradients\n",
" # Calculate using sigmoid gradient of the hidden output\n",
7 years ago
" #shape hidden_nx1 \n",
" hidden_grad = self.hidden_gradient_function(hidden_outputs)\n",
7 years ago
" \n",
" # TODO: Update the weights\n",
" \n",
7 years ago
" # update hidden-to-output weights with gradient descent step \n",
" self.weights_hidden_to_output += np.dot(self.lr, output_errors).dot(hidden_outputs.T) #shape hidden_nx1\n",
"\n",
7 years ago
" \n",
" # update input-to-hidden weights with gradient descent step\n",
7 years ago
" # shape hidden_n x input_n += (hidden_nx1 * hidden_nx1) . (1x1 . inputs_nx1.T) \n",
" self.weights_input_to_hidden += np.multiply(hidden_grad, hidden_errors.T).dot(self.lr).dot(inputs.T) \n",
7 years ago
" \n",
" def run(self, inputs_list):\n",
" # Run a forward pass through the network\n",
" inputs = np.array(inputs_list, ndmin=2).T\n",
" \n",
" #### Implement the forward pass here ####\n",
" # TODO: Hidden layer\n",
" hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)# signals into hidden layer\n",
" hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer\n",
" \n",
" # TODO: Output layer\n",
" final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)# signals into final output layer\n",
" final_outputs = self.output_activation(final_inputs)# signals from final output layer\n",
" return final_outputs"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def MSE(y, Y):\n",
" return np.mean((y-Y)**2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the network\n",
"\n",
"Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\n",
"\n",
"You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\n",
"\n",
"### Choose the number of epochs\n",
"This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\n",
"\n",
"### Choose the learning rate\n",
"This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n",
"\n",
"### Choose the number of hidden nodes\n",
"The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Progress: 99.9% ... Training loss: 0.067 ... Validation loss: 0.162"
]
}
],
"source": [
"import sys\n",
"\n",
"### Set the hyperparameters here ###\n",
7 years ago
"epochs = 1000\n",
7 years ago
"#learning_rate = 0.03\n",
7 years ago
"learning_rate = 0.07\n",
7 years ago
"N_i = train_features.shape[1]\n",
"hidden_nodes = int(N_i / 1.2)\n",
"output_nodes = 1\n",
"\n",
"\n",
"network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n",
"\n",
"losses = {'train':[], 'validation':[]}\n",
"for e in range(epochs):\n",
" # Go through a random batch of 128 records from the training data set\n",
" batch = np.random.choice(train_features.index, size=128)\n",
" for record, target in zip(train_features.ix[batch].values, \n",
" train_targets.ix[batch]['cnt']):\n",
" network.train(record, target)\n",
" \n",
" # Printing out the training progress\n",
" train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n",
" val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n",
" sys.stdout.write(\"\\rProgress: \" + str(100 * e/float(epochs))[:4] \\\n",
" + \"% ... Training loss: \" + str(train_loss)[:5] \\\n",
" + \" ... Validation loss: \" + str(val_loss)[:5])\n",
" \n",
" losses['train'].append(train_loss)\n",
" losses['validation'].append(val_loss)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"(0.0, 0.8)"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"image/png": "<base64 PNG truncated in source: training and validation loss curves over epochs>",
"text/plain": [
"<matplotlib.figure.Figure at 0x7fa490425780>"
]
},
"metadata": {
"image/png": {
"height": 356,
"width": 526
}
},
"output_type": "display_data"
}
],
"source": [
"plt.plot(losses['train'], label='Training loss')\n",
"plt.plot(losses['validation'], label='Validation loss')\n",
"plt.legend()\n",
"plt.ylim(ymax=0.8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Check out your predictions\n",
"\n",
"Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"image/png": "<base64 PNG truncated in source: predicted vs. actual hourly ridership over the test period>",
"text/plain": [
"<matplotlib.figure.Figure at 0x7fa4903d63c8>"
]
},
"metadata": {
"image/png": {
"height": 387,
"width": 706
}
},
"output_type": "display_data"
}
],
"source": [
"fig, ax = plt.subplots(figsize=(8,4))\n",
"\n",
"mean, std = scaled_features['cnt']\n",
"predictions = network.run(test_features)*std + mean\n",
"ax.plot(predictions[0], label='Prediction')\n",
"ax.plot((test_targets['cnt']*std + mean).values, label='Data')\n",
"ax.set_xlim(right=len(predictions))\n",
"ax.legend()\n",
"\n",
"dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\n",
"dates = dates.apply(lambda d: d.strftime('%b %d'))\n",
"ax.set_xticks(np.arange(len(dates))[12::24])\n",
"_ = ax.set_xticklabels(dates[12::24], rotation=45)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Thinking about your results\n",
" \n",
"Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n",
"\n",
"> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n",
"\n",
"#### Your answer below\n",
"- The model predicts the data good enough.\n",
"\n",
"- The model fails for the period of christmas holidays. \n",
"\n",
"- **Reason:** We can expect users to change behavior around christmas time (family visit, holidays ..). We also have much less data for christmas holidays which occures only once a year then the rest of the year. So the model is over fitted for normal days vs christmas holidays. We would need data over several years to fit the model for these seasonal changes."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Unit tests\n",
"\n",
"Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project."
]
},
{
"cell_type": "code",
"execution_count": 160,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
".....\n",
"----------------------------------------------------------------------\n",
"Ran 5 tests in 0.004s\n",
"\n",
"OK\n"
]
},
{
"data": {
"text/plain": [
"<unittest.runner.TextTestResult run=5 errors=0 failures=0>"
]
},
"execution_count": 160,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import unittest\n",
"\n",
"inputs = [0.5, -0.2, 0.1]\n",
"targets = [0.4]\n",
"test_w_i_h = np.array([[0.1, 0.4, -0.3], \n",
" [-0.2, 0.5, 0.2]])\n",
"test_w_h_o = np.array([[0.3, -0.1]])\n",
"\n",
"class TestMethods(unittest.TestCase):\n",
" \n",
" ##########\n",
" # Unit tests for data loading\n",
" ##########\n",
" \n",
" def test_data_path(self):\n",
" # Test that file path to dataset has been unaltered\n",
" self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n",
" \n",
" def test_data_loaded(self):\n",
" # Test that data frame loaded\n",
" self.assertTrue(isinstance(rides, pd.DataFrame))\n",
" \n",
" ##########\n",
" # Unit tests for network functionality\n",
" ##########\n",
"\n",
" def test_activation(self):\n",
" network = NeuralNetwork(3, 2, 1, 0.5)\n",
" # Test that the activation function is a sigmoid\n",
" self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n",
"\n",
" def test_train(self):\n",
" # Test that weights are updated correctly on training\n",
" network = NeuralNetwork(3, 2, 1, 0.5)\n",
" network.weights_input_to_hidden = test_w_i_h.copy()\n",
" network.weights_hidden_to_output = test_w_h_o.copy()\n",
" \n",
" network.train(inputs, targets)\n",
" self.assertTrue(np.allclose(network.weights_hidden_to_output, \n",
" np.array([[ 0.37275328, -0.03172939]])))\n",
" self.assertTrue(np.allclose(network.weights_input_to_hidden,\n",
" np.array([[ 0.10562014, 0.39775194, -0.29887597],\n",
" [-0.20185996, 0.50074398, 0.19962801]])))\n",
"\n",
" def test_run(self):\n",
" # Test correctness of run method\n",
" network = NeuralNetwork(3, 2, 1, 0.5)\n",
" network.weights_input_to_hidden = test_w_i_h.copy()\n",
" network.weights_hidden_to_output = test_w_h_o.copy()\n",
"\n",
" self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n",
"\n",
"suite = unittest.TestLoader().loadTestsFromModule(TestMethods())\n",
"unittest.TextTestRunner().run(suite)"
]
}
],
"metadata": {
"anaconda-cloud": {},
"hide_input": false,
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}