
Commit

Merge pull request #59 from themarshallproject/0026-random-carousel
Random carousel review, or ... randousel?!
Weihua4455 authored Sep 7, 2022
2 parents 1882805 + a953be2 commit 3a900bb
Showing 68 changed files with 8,603 additions and 6,185 deletions.
38 changes: 7 additions & 31 deletions .github/dependabot.yml
@@ -1,33 +1,9 @@
 version: 2
 updates:
-- package-ecosystem: npm
-  directory: /
-  schedule:
-    interval: daily
-    time: 10:00
-  open-pull-requests-limit: 10
-  ignore:
-  - dependency-name: "@octokit/rest"
-    versions:
-    - 18.0.15
-    - 18.1.0
-    - 18.1.1
-    - 18.2.0
-    - 18.2.1
-    - 18.3.0
-    - 18.3.1
-    - 18.3.2
-    - 18.3.3
-    - 18.3.4
-    - 18.3.5
-    - 18.4.0
-    - 18.5.0
-    - 18.5.2
-  - dependency-name: gulp-markdown
-    versions:
-    - 5.1.0
-  - dependency-name: elliptic
-    versions:
-    - 6.5.3
-  allow:
-  - dependency-type: production
+- package-ecosystem: npm
+  directory: "/"
+  schedule:
+    interval: weekly
+    time: "10:00"
+    timezone: "America/New_York"
+  open-pull-requests-limit: 10
8 changes: 7 additions & 1 deletion .idea/workspace.xml

Some generated files are not rendered by default.

20 changes: 20 additions & 0 deletions Makefile
@@ -23,6 +23,8 @@ merge_old_and_new_data: analysis/output_data/q1_data_with_303_vetted_info.csv
help: ## Display this help
	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z0-9\%\\.\/_-]+:.*?##/ { printf "\033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)

.PHONY: graphics_data
graphics_data: src/assets/data/arpa_wtfs.json ## Download and parse data used for carousel


##@ Analysis
@@ -43,6 +45,20 @@ analysis/source_data/April-2022-Quarterly-and-Annual-Reporting-Data-through-Marc
	@echo "Downloading source data"
	curl https://s3.amazonaws.com/tmp-gfx-public-data/arpa_ncsl20220125/April-2022-Quarterly-and-Annual-Reporting-Data-through-March-31-2022.xlsx -o $@


analysis/output_data/arpa_wtfs.json: ## Pull hand-curated WTF examples from Airtable
	curl 'https://api.baseql.com/airtable/graphql/appjNDCHFduMTVuRA?' \
		-H 'accept: application/json' \
		-H 'cache-control: no-cache' \
		-H 'content-type: application/json' \
		-H 'pragma: no-cache' \
		--data-raw '{"query":"query MyQuery {\n requests {\n\t\tplace\n state\n description\n obligations\n projectName\n obligations\n budget\n category\n wtf\n rank\n }\n}","variables":null,"operationName":"MyQuery"}' \
		--compressed \
		-o $@

src/assets/data/arpa_wtfs.json: analysis/output_data/arpa_wtfs.json ## Filter the WTF data and write it to the graphics data folder
	$(PYENV) python analysis/filter_by_wtf.py $< $@

##@ Upload/sync

.PHONY: deploy
@@ -57,3 +73,7 @@ clean/source_data: ## Clean source data
.PHONY: clean/output_data
clean/output_data: ## Clean processed data
	rm -rf analysis/output_data/*

.PHONY: clean/graphics_data
clean/graphics_data: ## Clean graphics data
	rm -rf src/assets/data/*
37 changes: 21 additions & 16 deletions README.md
@@ -43,10 +43,10 @@ any custom data processing this project may handle.
 + [Custom headers](#custom-headers)
   - [Using Endrun metadata to develop custom headers](#using-endrun-metadata-to-develop-custom-headers)
 + [Customizing the layout of graphics locally](#customizing-the-layout-of-graphics-locally)
-- [Using external data sources in your HTML](#using-external-data-sources-in-your-html)
+- [Using external data sources in your HTML and JavaScript](#using-external-data-sources-in-your-html-and-javascript)
 + [Example: basic table](#example-basic-table)
-+ [Example: writing to JavaScript variable](#example-writing-to-javascript-variable)
 + [CSV data formats](#csv-data-formats)
++ [Accessing the data from JavaScript](#accessing-the-data-from-javascript)
 - [Using pre-configured templates](#using-pre-configured-templates)
 * [Chart templates](#chart-templates)
 * [ai2html template](#ai2html-template)
@@ -195,7 +195,7 @@ graphics, and then deploy and put them in a post. Then copy the produced post's
contents back down to `localtext.md`, so that your graphics environment more
closely resembles the real post.

-## Using external data sources in your HTML
+## Using external data sources in your HTML and JavaScript

You may want to use a data file such as a CSV or JSON to populate your HTML.
The graphics rig makes any CSV or JSON files placed in `src/template-files`
@@ -274,19 +274,6 @@ The result should appear like this:
</tr>
</tbody></table>

#### Example: writing to JavaScript variable

This templating also allows us to make data accessible to JavaScript by
writing it directly to the page as a global variable. Here's an example:

```
<script type="text/javascript">var statesData = {{ data.states|dump|safe }};</script>
```

We use two Nunjucks filters, [dump](https://mozilla.github.io/nunjucks/templating.html#dump)
and [safe](https://mozilla.github.io/nunjucks/templating.html#safe), to
print the data as JSON and prevent the template engine from escaping the values.
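Once injected, the global is plain data that any later script on the page can read directly. A minimal sketch, where the rows and their `state`/`obligations` fields are hypothetical stand-ins for whatever `data.states` actually contains:

```javascript
// In the page, statesData would be injected by the template:
//   <script>var statesData = {{ data.states|dump|safe }};</script>
// Here we stub it with hypothetical rows to show the usage.
var statesData = [
  { state: "Alabama", obligations: 12500 },
  { state: "Alaska", obligations: 8200 }
];

// Any script loaded after the injected <script> tag can use it directly.
var total = statesData.reduce(function (sum, row) {
  return sum + row.obligations;
}, 0);
console.log(total); // 20700
```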

#### CSV data formats

CSV files added to `src/template-files` are converted to JSON before
@@ -383,6 +370,24 @@ would return a JSON like this:
}
```

#### Accessing the data from JavaScript

The above methods explain how to access data in `src/template-files`
from your HTML. If you need to access that same data file from your JavaScript, you can find it in JSON format in `assets/import-data`.

For example, the file `src/template-files/sports.csv` will be converted
to JSON and written to `src/assets/import-data/sports.json`. You can
load it into your JavaScript however you like from there. For example:

```
fetch('assets/import-data/sports.json')
.then((response) => response.json())
.then((data) => {
// Whatever you want from here
})
```
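The same load can also be written with `async`/`await`. One caveat worth knowing: `fetch()` resolves even on 404/500 responses and only rejects on network failure, so an explicit status check is a good habit. A sketch (the function name is our own):

```javascript
// Load the converted JSON with async/await and a basic status check;
// fetch() resolves even for HTTP error responses, so test response.ok.
async function loadSports() {
  const response = await fetch('assets/import-data/sports.json');
  if (!response.ok) {
    throw new Error('Failed to load sports.json: ' + response.status);
  }
  return response.json();
}

// loadSports().then((data) => { /* whatever you want from here */ });
```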


## Using pre-configured templates

The `templates` directory houses frequently used graphic formats and
23 changes: 23 additions & 0 deletions analysis/demo_output.py
@@ -0,0 +1,23 @@
import click
import pandas as pd


@click.command()
@click.argument('dep1')
@click.argument('dep2')
@click.argument('outputfilename')
def process(dep1, dep2, outputfilename):
    print(dep1)
    print(dep2)
    df1 = pd.read_csv(dep1)
    df2 = pd.read_csv(dep2)

    output_df = df1.join(df2)
    # .... other processing steps
    output_df.to_csv(outputfilename)
    # Write to the target make passes in, not a hardcoded path like
    # output_df.to_csv("/big/long/hardcoded_path.csv")


if __name__ == '__main__':
    process()
5 changes: 5 additions & 0 deletions analysis/demo_output_dep1.py
@@ -0,0 +1,5 @@
def process():
    print('hello weihua')

if __name__ == '__main__':
    process()
5 changes: 5 additions & 0 deletions analysis/demo_output_dep2.py
@@ -0,0 +1,5 @@
def process():
    print('hello ana')

if __name__ == '__main__':
    process()
28 changes: 28 additions & 0 deletions analysis/filter_by_wtf.py
@@ -0,0 +1,28 @@
import pandas as pd
import click
import json


@click.command()
@click.argument('input_filename')
@click.argument('output_filename')
def process(input_filename, output_filename):
    df = read_json(input_filename)
    filtered_data = filter_data(df)
    export(filtered_data, output_filename)


def read_json(input_filename):
    # The Airtable export is JSON, not CSV: pull the records out of
    # data.requests and load them into a DataFrame.
    with open(input_filename) as f:
        data = json.load(f)['data']['requests']
    return pd.DataFrame(data)


def filter_data(df):
    # Keep only the rows flagged as WTF examples.
    return df[df['wtf'] == True]


def export(filtered_data, output_filename):
    filtered_data.to_json(output_filename, orient='records')
    print("Exported filtered graphics data")


if __name__ == '__main__':
    process()
Binary file not shown.
3 changes: 0 additions & 3 deletions analysis/output_data/SLFRF-Recipient-all-Tiers.csv

This file was deleted.

3 changes: 0 additions & 3 deletions analysis/output_data/SLFRF-Recipients-Tier1.csv

This file was deleted.

3 changes: 0 additions & 3 deletions analysis/output_data/alabama.csv

This file was deleted.

3 changes: 0 additions & 3 deletions analysis/output_data/arpa_le.csv

This file was deleted.

1 change: 1 addition & 0 deletions analysis/output_data/arpa_wtfs.json

Large diffs are not rendered by default.

3 changes: 0 additions & 3 deletions analysis/output_data/cj_related_projects_to_vet.csv

This file was deleted.

3 changes: 0 additions & 3 deletions analysis/output_data/comments.csv

This file was deleted.

3 changes: 0 additions & 3 deletions analysis/output_data/group_by_category.csv

This file was deleted.

3 changes: 0 additions & 3 deletions analysis/output_data/output.csv

This file was deleted.

3 changes: 0 additions & 3 deletions analysis/output_data/parsed_pdf_text.csv

This file was deleted.

Binary file not shown.
Binary file not shown.

0 comments on commit 3a900bb
