
External link fix #91

Merged: 6 commits merged on May 11, 2021
Changes from all commits
2 changes: 1 addition & 1 deletion src/applications/classification.md
@@ -187,7 +187,7 @@ Here, we see that an increase in the `decile_score` still leads to an increase in
the predicted probability of recidivism, while older individuals are slightly
less likely to commit crime again.

-We'll build on an example from the [scikit-learn documentation](https://scikit-learn.org/stable/auto_examples/svm/plot_iris.html) to visualize the predictions of this model.
+We'll build on an example from the [scikit-learn documentation](https://scikit-learn.org/stable/auto_examples/svm/plot_iris_svc.html) to visualize the predictions of this model.

```{code-cell} python
def plot_contours(ax, mod, xx, yy, **params):
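For readers skimming the diff: the helper introduced at this point in classification.md follows the pattern of the linked scikit-learn example, which evaluates the fitted model on a grid of feature values and shades the decision regions as filled contours. A minimal sketch of that pattern (an illustration, not necessarily the exact code in the lecture; `mod` is assumed to be any fitted classifier with a `.predict` method):

```python
import numpy as np

def make_meshgrid(x, y, h=0.02):
    # Build a dense grid covering the range of the two plotted features.
    x_min, x_max = x.min() - 1, x.max() + 1
    y_min, y_max = y.min() - 1, y.max() + 1
    return np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

def plot_contours(ax, mod, xx, yy, **params):
    # Predict a class for every grid point, then shade the decision regions
    # on a matplotlib Axes with filled contours.
    Z = mod.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    return ax.contourf(xx, yy, Z, **params)
```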
2 changes: 1 addition & 1 deletion src/applications/maps.md
@@ -301,7 +301,7 @@ Along the way, we will learn a couple of valuable lessons:
### Find and Plot State Border

Our first step will be to find the border for the state of interest. This can be found on the [US
-Census's website here](https://www.census.gov/geo/maps-data/data/cbf/cbf_state.html).
+Census's website here](https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html).

You can download the `cb_2016_us_state_5m.zip` by hand, or simply allow `geopandas` to extract
the relevant information from the zip file online.
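A rough sketch of that second option (assumptions: the 2016 cartographic boundary zip is still hosted at the Census TIGER/GENZ2016 path shown below, and a recent geopandas that can read a zipped shapefile straight from a URL):

```python
import geopandas as gpd

# Assumed direct download URL for the 2016 cartographic boundary file.
url = "https://www2.census.gov/geo/tiger/GENZ2016/shp/cb_2016_us_state_5m.zip"

# geopandas can open the shapefile inside the zip archive directly.
states = gpd.read_file(url)

# Keep only the state of interest, e.g. New York, and plot its border.
ny = states.loc[states["NAME"] == "New York"]
ny.plot()
```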
4 changes: 2 additions & 2 deletions src/pandas/groupby.md
@@ -29,8 +29,8 @@ kernelspec:

- Details for all delayed US domestic flights in December 2016,
obtained from the [Bureau of Transportation
-Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)
+Statistics](https://www.transtats.bts.gov/OT_Delay/OT_DelayCause1.asp)
```{literalinclude} ../_static/colab_light.raw
```

4 changes: 2 additions & 2 deletions src/pandas/merge.md
@@ -28,7 +28,7 @@ kernelspec:
[Goodreads](https://www.goodreads.com/)
- Details for all delayed US domestic flights in November 2016,
obtained from the [Bureau of Transportation
-Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)
+Statistics](https://www.transtats.bts.gov/OT_Delay/OT_DelayCause1.asp)


```{literalinclude} ../_static/colab_light.raw
@@ -629,7 +629,7 @@ It looks like most books have an average rating of just below 4.
Let's look at one more example.

This time, we will use a dataset from the [Bureau of Transportation
-Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)
+Statistics](https://www.transtats.bts.gov/OT_Delay/OT_DelayCause1.asp)
that describes the cause of all US domestic flight delays
in November 2016:

2 changes: 1 addition & 1 deletion src/problem_sets/problem_set_6.md
@@ -241,7 +241,7 @@ Good luck!
Let's look at another example.

This time, we will use a dataset from the [Bureau of Transportation
-Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)
+Statistics](https://www.transtats.bts.gov/OT_Delay/OT_DelayCause1.asp)
that describes the cause for all US domestic flight delays in November 2016:

Loading this dataset the first time will take a minute or two because it is quite hefty... We recommend taking a break to view this [xkcd comic](https://xkcd.com/303/).
2 changes: 1 addition & 1 deletion src/problem_sets/problem_set_7.md
@@ -81,7 +81,7 @@ for (i, year) in enumerate(df.year.unique()):
## Questions 3-5

These questions use a dataset from the [Bureau of Transportation
-Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)
+Statistics](https://www.transtats.bts.gov/OT_Delay/OT_DelayCause1.asp)
that describes the cause for all US domestic flight delays
in November 2016. We used the same data in the previous problem set.

2 changes: 1 addition & 1 deletion src/python_fundamentals/collections.md
@@ -874,7 +874,7 @@ Here are some tickers and a price.

### Exercise 6

-Look at the [World Factbook for Australia](https://www.cia.gov/library/publications/the-world-factbook/geos/as.html)
+Look at the [World Factbook for Australia](https://www.cia.gov/the-world-factbook/countries/australia)
and create a dictionary with data containing the following types:
float, string, integer, list, and dict. Choose any data you wish.
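One possible shape for such a dictionary, if a hint helps (the figures below are placeholders for illustration, not actual Factbook values):

```python
australia = {
    "name": "Australia",                                    # string
    "population_millions": 25.7,                            # float (placeholder)
    "number_of_states": 6,                                  # integer
    "major_cities": ["Sydney", "Melbourne", "Brisbane"],    # list
    "capital": {"name": "Canberra", "planned_city": True},  # dict
}
```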

6 changes: 3 additions & 3 deletions src/scientific/applied_linalg.md
@@ -48,7 +48,7 @@ In numpy terms, a vector is a 1-dimensional array.

We often think of 2-element vectors as directional lines in the XY axes.

-This image, from the [QuantEcon Python lecture](https://lectures.quantecon.org/py/linear_algebra.html#)
+This image, from the [QuantEcon Python lecture](https://python.quantecon.org/linear_algebra.html)
is an example of what this might look like for the vectors `(-4, 3.5)`, `(-3, 3)`, and `(2, 4)`.

```{figure} https://datascience.quantecon.org/assets/_static/applied_linalg_files/vector.png
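A rough matplotlib sketch of a figure like that one (an illustration only, not the code that produced the image referenced above):

```python
import matplotlib.pyplot as plt

vectors = [(-4, 3.5), (-3, 3), (2, 4)]

fig, ax = plt.subplots()
for x, y in vectors:
    # Draw each vector as an arrow from the origin and label its tip.
    ax.annotate("", xy=(x, y), xytext=(0, 0),
                arrowprops={"arrowstyle": "->"})
    ax.text(x, y, f"({x}, {y})")

ax.set_xlim(-5, 3.5)
ax.set_ylim(0, 5)
ax.axhline(0, color="black", linewidth=0.5)
ax.axvline(0, color="black", linewidth=0.5)
plt.show()
```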
@@ -358,7 +358,7 @@ Computing the inverse requires that a matrix be square and satisfy some other conditions

We also skip the exact details of how this inverse is computed, but, if you are interested,
you can visit the
-[QuantEcon Linear Algebra lecture](https://lectures.quantecon.org/py/linear_algebra.html)
+[QuantEcon Linear Algebra lecture](https://python.quantecon.org/linear_algebra.html)
for more details.

We demonstrate how to compute the inverse with numpy below.
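The numpy demonstration that follows in the source file sits outside this hunk; the essence is a single call to `np.linalg.inv` (a sketch with an assumed matrix, not the lecture's exact example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

A_inv = np.linalg.inv(A)
print(A_inv)

# Multiplying a matrix by its inverse recovers (approximately) the identity.
print(A @ A_inv)
```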
@@ -621,7 +621,7 @@ print(NPV_mf)
Note: While our matrix above was very simple, this approach works for much more
complicated `A` matrices as long as we can write $x_t$ using $A$ and $x_0$ as
$x_t = A^t x_0$ (For an advanced description of this topic, adding randomness, read about
-linear state-space models with Python <https://lectures.quantecon.org/py/linear_models.html>).
+linear state-space models with Python <https://python.quantecon.org/linear_models.html>).
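As a quick illustration of that note, iterating $x_t = A^t x_0$ forward can be written directly with `np.linalg.matrix_power` (a sketch with made-up values for `A` and `x_0`):

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.05, 0.95]])
x0 = np.array([1.0, 0.0])

t = 10
xt = np.linalg.matrix_power(A, t) @ x0   # x_t = A^t x_0
print(xt)
```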

### Unemployment Dynamics

2 changes: 1 addition & 1 deletion src/scientific/randomness.md
@@ -273,7 +273,7 @@ In general, numpy code that is *vectorized* will perform better than numpy code
element at a time.

For more information see the
-[QuantEcon lecture on performance Python](https://lectures.quantecon.org/py/numba.html) code.
+[QuantEcon lecture on performance Python](https://python-programming.quantecon.org/numba.html) code.
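A small illustration of the point (a sketch; the relative speed-up depends on the machine and array size):

```python
import numpy as np

x = np.random.rand(1_000_000)

# Element at a time: a Python-level loop over the array.
total_loop = 0.0
for value in x:
    total_loop += value**2

# Vectorized: one numpy call that works on the whole array at once,
# which is typically orders of magnitude faster.
total_vec = np.sum(x**2)

print(total_loop, total_vec)
```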

#### Profitability Threshold
