Hacky fixes to archive likes #114

Open
wants to merge 12 commits into master

Conversation

@aspensmonster commented Dec 4, 2018

This doesn't actually have to be merged. It's more for anyone else who's looking to archive their years' worth of likes before Tumblr rm -rfs them into oblivion.

I intended to make it cleaner, but given Tumblr's two-week deadline before they delete all NSFW content (and, I'm sure, plenty of other content that will accidentally get swept up in this), I figured someone might find it useful in its present state.

The code is updated to archive all likes, with a ten-second pause between API calls to try to avoid hitting API quotas. The previous code would only get the 1000 most recent likes.
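For anyone curious what that loop amounts to, here's a minimal, self-contained sketch of the pagination scheme described above (not the PR's exact code); the blog name and API key are placeholders, and the field names come from the public likes endpoint:

    # Minimal sketch of paginating a blog's likes with a pause between calls.
    # BLOG and API_KEY are placeholders; liked_posts/liked_timestamp are the
    # fields the /v2/blog/<name>/likes endpoint returns.
    import json
    import time
    import urllib.request

    BLOG = "some_blog_name.tumblr.com"
    API_KEY = "YOUR_API_KEY"
    MAX_LIKES = 20  # the likes endpoint returns at most 20 posts per request

    def fetch_all_likes():
        likes = []
        before = None
        while True:
            url = ("https://api.tumblr.com/v2/blog/%s/likes?api_key=%s&limit=%d"
                   % (BLOG, API_KEY, MAX_LIKES))
            if before is not None:
                url += "&before=%d" % before
            with urllib.request.urlopen(url) as resp:
                posts = json.load(resp)["response"]["liked_posts"]
            if not posts:
                break  # empty batch: we've walked past the oldest like
            likes.extend(posts)
            before = posts[-1]["liked_timestamp"]
            time.sleep(10)  # be gentle with the API between requests
        return likes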

Potential remaining problems:

  • I don't think I've gotten incremental backup working with this yet. You can't just rely on the latest timestamp like you can for a blog's posts, since a user could conceivably run around "liking" older posts; I think that's what stopped me from tackling this before, since I'd have to iterate over the whole collection anyway to make sure nothing was missed.
  • The rendered HTML shows posts in the chronological order they were originally posted, not the order in which the user actually "liked" them.
  • No way to view likes by tag (though it looks like this is an open feature request)

I was going to get around to those problems "eventually", but my philosophy is that I can also download the JSON payloads and sort it all out... "eventually".

I've been running this script like so:

./tumblr_backup.py --likes --outdir=/home/some/path/tumblr_backup_likes/ --save-video --save-audio -j --exif='' some_blog_name

Edit: Be sure that the outdir is not where you have your own blog's posts backed up, or you'll overwrite the HTML. As far as this script is concerned, a backup of a blog's posts and a backup of a blog's likes both render to the same HTML layout, so if you want to keep both, you need to keep them in separate paths.

I've saved 70 GiB worth of historical likes this way. Not everything is saved. Some posts caused youtube-dl to choke (an annoying problem that never goes away, given the frenetic update cadence that most video sites seem to adhere to). Some videos that do download don't seem to want to play. And of course some of the liked content has been deleted over the years. But 70 GiB of mostly-good salvage of 30k+ likes is better than 0.

Obviously the likes have to be public. And if you want to gather everything (including posts tagged NSFW, or whatever else it was; all I know is that my original run didn't grab everything), then the blog will need to be marked as containing sensitive content, at least while the script runs. I don't remember all of the specifics, just that a second pass after doing that yielded more results.

Also, I'd advise setting up your own app and using your own API key. Likes can only be iterated over 20 at a time, and if you have tens of thousands of likes, you could conceivably go over hourly/daily limits. I don't know what quotas the script's default public API key has (maybe it's not rate limited?), but it'd probably be best to be a good neighbor and get your own API key to use with these tweaks.
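For a rough sense of scale, here's a back-of-envelope sketch using only the numbers already mentioned in this thread (20 likes per request, a ten-second pause, 30k+ likes); the totals are illustrative, not measurements:

    # Rough estimate of what a full likes run costs in API calls and time.
    total_likes = 30_000        # roughly the size of the collection mentioned above
    per_request = 20            # maximum the likes endpoint returns per call
    pause_seconds = 10          # the delay this PR sleeps between calls

    requests_needed = -(-total_likes // per_request)      # ceiling division -> 1500
    hours_of_pauses = requests_needed * pause_seconds / 3600
    print(requests_needed, round(hours_of_pauses, 1))     # 1500 requests, ~4.2 hours of sleeping alone

That many calls against a shared key is exactly the kind of load that runs into hourly or daily quotas, hence the suggestion to register your own.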

@vl09 commented Dec 4, 2018

You're a legend!!! Just saved a few thousand posts and likes before everything goes down. You only forgot to define MAX_LIKES there but it works perfectly fine otherwise! Thank you!!

@Hrxn commented Dec 4, 2018

Woah, wait a sec..

I've heard about the issues Tumblr recently had with its App on Apple's Walled Garden (App Store), but this is new to me:

[..] but given Tumblr's two-week deadline before they delete all NSFW content [..]

Is this official?

@aspensmonster (Author)

You only forgot to define MAX_LIKES there but it works perfectly fine otherwise! Thank you!!

Because of course I did. Didn't pay quite enough attention when git add -p'ing the change (didn't want to accidentally spill my app's API key). It's there now.

@aspensmonster (Author) commented Dec 4, 2018

@Hrxn

From this link:

https://staff.tumblr.com/post/180758987165/a-better-more-positive-tumblr

So what’s next?

Starting December 17, 2018, we will begin enforcing this new policy. Community members with content that is no longer permitted on Tumblr will get a heads up from us in advance and steps they can take to appeal or preserve their content outside the community if they so choose. All changes won’t happen overnight as something of this complexity takes time.

Another thing, filtering this type of content versus say, a political protest with nudity or the statue of David, is not simple at scale. We’re relying on automated tools to identify adult content and humans to help train and keep our systems in check. We know there will be mistakes, but we’ve done our best to create and enforce a policy that acknowledges the breadth of expression we see in the community.

I'm reading "steps they can take to appeal or preserve their content outside the community" to mean that, eventually --perhaps not on the 17th two weeks from now, but eventually-- the content will be deleted. And yes, this kind of filtering "is not simple at scale." I'd argue it's not possible at scale. There's already plenty that's getting mis-flagged.

@Hrxn commented Dec 4, 2018

Damn. Thanks. Agreed, I read it in the same way. A real shame.

@Laydmei commented Dec 5, 2018

A little question from a beginner:
I tried the command "./tumblr_backup.py --likes --outdir=/home/some/path/tumblr_backup_likes/ --save-video --save-audio -j --exif=' some_blog_name'" but it does not work. How can I download all my likes? Because I cannot download more than 1000 likes. ?_?

@vl09 commented Dec 5, 2018

A little question from a beginner:
I tried the command "./tumblr_backup.py --likes --outdir=/home/some/path/tumblr_backup_likes/ --save-video --save-audio -j --exif=' some_blog_name'" but it does not work. How can I download all my likes? Because I cannot download more than 1000 likes. ?_?

Be sure to make your blog explicit first in the settings. Somehow, the code works better that way.

@cebtenzzre (Collaborator)

It "works better" because AFAIK a non-explicit blog will not publicly show explicit likes.

@Laydmei commented Dec 5, 2018

I have checked. I tried again. But it still does not download all my Likes. Maybe I have too many Likes? I really do not understand.

@Doty1154 commented Dec 6, 2018

Huh, yeah, I still get rate limited after the first couple hundred likes. I got like 40 gigs down so far. Cries

@Laydmei commented Dec 6, 2018

Any idea how to solve the problem? Please?

@Hrxn commented Dec 6, 2018

Did you make the code changes (as seen under Files changed)?

@cebtenzzre (Collaborator)

@Hrxn Lol, manually patching? Why not just clone aggroskater/tumblr-utils?

@Hrxn commented Dec 7, 2018

Yes, obviously.
I meant to ask in order to verify that you are indeed running the changed version.
Did you verify the code changes?

@KlfJoat commented Dec 7, 2018

I just added this and it works. Great job!

@allefeld commented Dec 7, 2018

@aggroskater I used your version, and it works much better for likes than the original, thank you!

However, I still only successfully downloaded 5593 of 9626 likes, i.e. I'm missing 4033.

While downloading, the program listed 44 times "HTTP Error 403: Forbidden", most of them with URLs referring to the domain vtt.tumblr.com.
It listed 7 times "HTTP Error 404: Not Found", all of them referring to the domain 66.media.tumblr.com.
It listed 6 times "WARNING: Could not send HEAD request" with a URL beginning with https://www.tumblr.com/privacy/consent?redirect=. In these cases, messages followed saying that the program is "Falling back on generic information extractor." and "Unable to download video".

I understand that these 44 + 7 + 6 = 57 likes may simply be inaccessible (I tried some of the URLs manually and could verify that), but that accounts only for a tiny fraction of the 4033 likes that were skipped without warning.

Is there any chance you can fix this? If I can do something to help, please tell me.

@adamamyl commented Dec 7, 2018

While downloading, the program listed 44 times "HTTP Error 403: Forbidden", most of them with URLs referring to the domain vtt.tumblr.com.

@allefeld If your behaviour is like what I've seen elsewhere with 403 errors on videos, I think it is that the videos no longer exist / are not accessible anymore – i.e. it's not a problem with this script or youtube-dl.

@Hrxn commented Dec 8, 2018

A lot of videos are indeed 'removed', or to be more exact, blocked. That is, 403 is the expected result in those cases.
This was the big Tumblr video purge a while ago..

@allefeld commented Dec 8, 2018

@adamamyl @Hrxn you're probably right. However, that doesn't explain why the remaining 4033 - 57 = 3976 likes weren't downloaded and don't have a corresponding error message.

@Doty1154 commented Dec 9, 2018

If only it were possible to just download all of https://www.tumblr.com/likes while logged in, via the CLI.

@allefeld commented Dec 10, 2018

I now think these inconsistencies have nothing to do with @bbolli's or @aggroskater's code, but with the extremely weird and unreliable tumblr API.

I experimented a bit with the API myself. I first went through the list using the query/next value for the next request, and found that it skips over likes. I then used the liked_timestamp field from the last returned post, which worked a little better. I also experimented with the limit parameter and found that for a small value, which results in a lot of requests, at some point the API simply starts to return 0 posts, even though I know the requested time point has many likes before it. Mind you, there's no error message, just an "OK" response containing zero posts.
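If anyone wants to reproduce that behaviour, here's a small diagnostic sketch of the second approach (walking backwards with before=liked_timestamp) that also reports when the API returns an empty batch even though its own liked_count says more likes should exist; blog name and API key are placeholders:

    # Walk a blog's likes backwards via liked_timestamp and flag early cut-offs.
    import json
    import time
    import urllib.request

    BLOG = "some_blog_name.tumblr.com"
    API_KEY = "YOUR_API_KEY"

    def count_reachable_likes():
        seen, before, liked_count = 0, None, None
        while True:
            url = ("https://api.tumblr.com/v2/blog/%s/likes?api_key=%s&limit=20"
                   % (BLOG, API_KEY))
            if before is not None:
                url += "&before=%d" % before
            with urllib.request.urlopen(url) as resp:
                body = json.load(resp)["response"]
            liked_count = body.get("liked_count", liked_count)
            posts = body["liked_posts"]
            if not posts:
                if liked_count and seen < liked_count:
                    print("API stopped after %d of %d likes" % (seen, liked_count))
                break
            seen += len(posts)
            before = posts[-1]["liked_timestamp"]
            time.sleep(1)  # small pause; raise this if you hit rate limits
        return seen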

I used bbolli's code, aggroskater's fork, https://github.com/javierarce/tumblr-liked-photos-export, and my own code, and I never arrive at the 9000+ likes; I get a different number of recovered posts each time, and different numbers on different runs.

I'll be turning my back on tumblr soon, and just wanted to get my stuff out before the impending apocalypse. After both the social and technical blunders they've committed, I have to say: good riddance.

Sorry for venting. Thank you for your work!

@cherryband (Contributor)

I made the same thing some months ago but never thought of opening a pull request here. My implementation is #165, and it resolves the first two issues @aggroskater mentioned, and more. Hope it is helpful!

@aspensmonster (Author) commented Dec 10, 2018

I'll take a stab at incorporating @qtwyeuritoiy's work into my fork. Between the fixes for the first two issues and the fact that a tag index feature is now upstream (I've already rebased onto the latest upstream), my original issues are resolved.

@aspensmonster (Author)

I've got the pieces initially merged. It'll take a few hours to do a full grab and then test after liking some other posts.

Side note: it seems that the "mark as sensitive" feature, or whatever it was, is... no longer available in the desktop website's settings. I can't find it anywhere. That might also be playing havoc with downloading likes, for all I know. I might break down and try the OAuth approach at some point tonight/tomorrow. But that's its own can of worms that will entail pulling in some library that supports OAuth 1.0a's HMAC signing mechanism on the requests.

@nightpool commented Dec 16, 2018 via email

@allefeld commented Dec 17, 2018

Hey guys, trying this out, and it works great so far, surpassing the ~950 files I get from bbolli's version.
I want to try it with --save-audio and --save-video this time, so I've got a question for you all: how do I install youtube-dl for Windows? Putting the .exe from the youtube-dl page in the same folder as the tumblr_backup.py file doesn't work (tumblr_backup.py throws an error saying youtube_dl is not installed), even though they're both in a PATH directory.

EDIT: Okay, I put the tar.gz in the Python folder according to instructions from this thread, and I installed Python again on the same drive I have the OS and tumblr_backup folders on, and now it works.

@Soundsgoood, actually you shouldn't have had to reinstall Python. pip install youtube-dl was sufficient for me.

@aspensmonster (Author)

According to Tumblr staff's latest post, things aren't getting deleted quite yet, but hidden from view. Perhaps historical explicit likes are still accessible to the user if logged in? I'll try to implement an OAuth approach in the coming days to see if that's the case.

@Hrxn commented Dec 19, 2018

@aggroskater Yeah, can confirm. Sensitive content is hidden by Tumblr's web front-end (and I assume it is the same for the mobile apps), but the API still returns the same results as before, including URLs to the image files or clips not visible in the browser anymore.
I've been using gallery-dl, which supports Tumblr with OAuth authentication. Without any external dependencies, if I may add.

@allefeld

@aggroskater I made another attempt at saving a few more likes today, but now I'm getting an error message:

$ ./tumblr_backup.py --dirs --save-video --save-audio --likes --outdir=likes-new blogname
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1
HTTP Error 403: Forbidden getting https://api.tumblr.com/v2/blog/blogname.tumblr.com/likes?reblog_info=true&api_key=8YUsKJvcJxo2MDwmWMDiXZGuMuIbeCwuQGP5ZHSEA4jBJPMnJT&limit=1

(blog name changed for privacy)

Did they block your API key by any chance?

@allefeld

Apparently not... I tried it with my own key and the same thing happens.

@allefeld

@Hrxn, I tried gallery-dl, which downloaded the photos from my posts just fine, but I can't figure out how to download likes. Can you give me a hint?

@Hrxn commented Dec 19, 2018

@allefeld Well, gallery-dl selects the appropriate extractor by matching the given URL against specific patterns, and this includes support for different URL variants. I think this should be enough here:
gallery-dl http(s)://<blog-name>.tumblr.com/likes

@allefeld

@allefeld Well, gallery-dl selects the appropriate extractor by matching the given URL against specific patterns, and this includes support for different URL variants. I think this should be enough here:
gallery-dl http(s)://<blog-name>.tumblr.com/likes

That's what I thought, but I'm getting

[tumblr][error] You do not have permission to access the resource at 'https://<blog-name>.tumblr.com/likes'

though I used the -u and -p options. Adding -v reveals that gallery-dl does not use OAuth (just an API key) for likes.

Pity, but thanks!

@ytguwn commented Dec 20, 2018

Unfortunately, it is not possible to fetch all likes even with OAuth.

I tried using gallery-dl to do that. You need to configure it to make it OAuth-enabled. I also modified its code to use the OAuth-secured API endpoint (/user/likes instead of /blog/<name>/likes). This hasn't brought me any new content, though.

The same likes are also missing from the web interface (tumblr.com/likes).

Note: I have ~500 likes on tumblr and currently I am able to grab only half of them (and this number diminishes over time).
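For reference, here's a hedged sketch of what hitting the OAuth-protected /user/likes endpoint looks like. It assumes the requests and requests_oauthlib packages; the four tokens come from registering your own app and completing the OAuth 1.0a flow, and are placeholders here:

    # Fetch the first page of the authenticated user's likes over OAuth 1.0a.
    import requests
    from requests_oauthlib import OAuth1

    auth = OAuth1(
        "CONSUMER_KEY", "CONSUMER_SECRET",    # from your registered app
        "OAUTH_TOKEN", "OAUTH_TOKEN_SECRET",  # from the one-time authorization flow
    )

    resp = requests.get("https://api.tumblr.com/v2/user/likes",
                        params={"limit": 20}, auth=auth)
    resp.raise_for_status()
    payload = resp.json()["response"]
    print(payload["liked_count"], "likes reported")
    for post in payload["liked_posts"]:
        print(post.get("post_url"), post.get("liked_timestamp"))

As noted above, though, even this authenticated view appears to be missing likes, so OAuth alone is not a complete fix.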

@allefeld

Well, I guess it's time to say goodbye to tumblr for good...

@ytguwn commented Dec 26, 2018

FWIW there exists an unofficial Twitter Tumblr SVC API (see #161).

I don't know if it's useful for grabbing likes.

@cebtenzzre (Collaborator)

@ytguwn Assuming you mean Tumblr, it probably isn't. Private likes can be scraped via OAuth (see #200 for something similar), and the likes that aren't found with this probably aren't available via any API (possibly deleted posts).

@arete06 commented Sep 29, 2020

I get the following error when the script is almost finished:


Traceback (most recent call last):                                         
  File "tumblr_backup.py", line 1335, in <module>
    tb.backup(account)
  File "tumblr_backup.py", line 692, in backup
    before_timestamp = soup['response']['_links']['next']['query_params']['before']
KeyError: 'next'

@cebtenzzre (Collaborator)

@sldx12 I was able to reproduce this error. The API must've changed because I know this fork worked before. Basically, the API no longer gives the script a pointer to the next (empty) batch of liked posts, but this fork still expects it to be there (and fails when it isn't).
It can be fixed by removing the text and not soup['response']['liked_posts'], moving the try/except block to before the sleep and dedenting it once, and removing the now-empty else clause. I'll update my fork to match.
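In other words (a sketch of the resulting loop shape, not the literal diff; fetch_batch stands in for the script's API call and returns the parsed JSON response):

    # After the change, a missing 'next' link is treated as the normal end of
    # the likes collection instead of raising KeyError.
    def backup_likes(fetch_batch):
        all_posts = []
        before_timestamp = None
        while True:
            soup = fetch_batch(before_timestamp)
            posts = soup["response"]["liked_posts"]
            if not posts:
                break  # the usual exit: an empty final batch
            all_posts.extend(posts)
            try:
                before_timestamp = soup["response"]["_links"]["next"]["query_params"]["before"]
            except KeyError:
                # Newer API responses omit the 'next' link on the last
                # non-empty batch, so stop cleanly here.
                break
        return all_posts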

cebtenzzre added a commit to cebtenzzre/tumblr-utils that referenced this pull request Sep 30, 2020
@aspensmonster (Author)

I'll take a look. I haven't edited my script locally much in the past two years, but it has been working as recently as yesterday.

@aspensmonster (Author)

I just re-ran my script locally. No issues. Judging by the error message, I wonder if the soup['meta']['status'] bit wasn't 200 at the time of running:

                        try:
                            before_timestamp = soup['response']['_links']['next']['query_params']['before']
                        except KeyError:
                            if soup['meta']['status'] == 200 and not soup['response']['liked_posts']:
                                finished_with_likes = True
                                continue
                            else:
                                raise

The except KeyError bit is meant to handle the lack of a next element, and the only way I think you'd see a raised KeyError: 'next' is if either the meta status wasn't 200 or soup['response']['liked_posts'] was actually non-empty, causing the raise in the else bit to get hit. I honestly don't recall what the soup['response']['liked_posts'] bit was about, though.

@cebtenzzre (Collaborator)

@aggroskater soup['meta']['status'] is guaranteed to be 200 here, so the check is redundant anyway. If status is non-200, soup would be None and the # try the next batch branch would exit the loop early.

not soup['response']['liked_posts'] evaluates to true only if the batch is empty, so the code only allows a clean exit if the final empty batch (which would usually hit if not posts -> break) is the first one to lack a next link. In my testing, it seems that (for some of us, at least) the API no longer returns the next link in the last non-empty batch. I still get all of my liked posts according to the liked_count, so terminating after backing up the first batch that lacks a next link doesn't miss anything.

I made a test blog that can reproduce this issue - try python2 tumblr_backup.py --likes erablenous, and you should see the exception, assuming your API key works just like sldx12's and mine.

@arete06 commented Sep 30, 2020

@cebtenzzre When trying to run your most recent commit, I stumble upon the following (I tried to import it but couldn't):

Traceback (most recent call last):
  File "tumblr_backup.py", line 28, in <module>
    from util import ConnectionFile, HAVE_SSL_CTX, HTTP_TIMEOUT, LockedQueue, PY3, nullcontext, to_bytes, to_unicode
ImportError: No module named util

I also tried your fix in

It can be fixed by removing the text and not soup['response']['liked_posts'], moving the try/except block to before the sleep and dedenting it once, and removing the now-empty else clause. I'll update my fork to match.
but it gave the following error:

Traceback (most recent call last):                                
  File "tumblr_backup.py", line 1255, in <module>
    tb.backup(account)
  File "tumblr_backup.py", line 645, in backup
    j = min(i + MAX_LIKES, last_post)
NameError: global name 'MAX_LIKES' is not defined

@aspensmonster (Author)

@cebtenzzre I was able to replicate the issue on the test blog you gave with my version of the script. I confirmed that my local script is basically the same as the one at my fork's master, just with a different API key and an added time.sleep(10) when backing up posts (not likes; guess I was backing up some big blogs a while back and didn't want to blow through my API key's quota). Merely removing the and not soup['response']['liked_posts'] permitted the script to complete for me. Now I'm curious as to why I was able to back up my own blog's likes last night without issue. Presumably, the last page of results would be non-empty too (and missing the next piece) and I should have hit the same exception. Unless of course the API is behaving differently depending on the blog that's queried 🤷

What you're describing is jogging my memory a bit though. I'm assuming when I first wrote this that the API would always have a next link, even on the "last" page of results. So the loop would run once more, the API would return an empty page, no next element, and my logic would say "oh, it's a 200 response AND there's no next link AND the list is empty, so really that just means we're done."

I'm going to try again on my own blog after adding only one like. In theory the API should return one page with a single like, no next element in the _links piece, and fail. If not, well, then I'll at least have the two different API responses from my blog and erablenous to compare and scratch my head over.

@aspensmonster (Author)

Hmmm. I'm guessing that the reason it looks like it's "working" for me is that I was doing an incremental backup of likes, not a full run. The likes backup starts from the latest likes and works backward in time, and since I'm doing an incremental backup, I just stop after reaching ident_max, well before I get to the first ever like. In that scenario the next element (which is actually providing a before timestamp to keep going backwards (seriously, what was I thinking when I wrote this damn thing?)) obviously still exists. But for a full run, you get to the earliest set of likes and then... no more next element when the last, earliest page of results is returned.

I'm guessing at some point maybe the API would have given a "yeah, here's the link to 'before the earliest like ever', it's an empty page, have fun" response. And now it doesn't. Regardless, taking off the and not soup['response']['liked_posts'] bit on Line 694 still leaves my incremental backups working, and it also backs up the erablenous blog too. So that would be the "fix" if you're rocking this two-year-old script.

But it definitely looks like you've done a significant amount of work on the project, @cebtenzzre , so I'll probably look into your fork and/or the original @bbolli repo and see if there's a newer, better way for me to keep clinging to my tumblr likes :P

@cebtenzzre (Collaborator) commented Sep 30, 2020

@sldx12 You'll have to download my fork as a zip or use git clone, because there are new dependencies like util.py, wget.py, and note_scraper.py. And are you mixing and matching scripts? MAX_LIKES is defined at the top of aggroskater's version (this is the PR you're commenting on), which is the one I meant for you to modify.
@aggroskater Yeah, incremental backups stop early, so they would never see this issue (which happens at the end of the backup). TBH your fork still works better for likes than upstream - the offset parameter that upstream still uses gets stuck in an endless loop (with an unmerged workaround). But please check out my fork; it's full of neat things I've added in the two years I've been using tumblr-utils, and it recently got support for 1000+ likes that works similarly to this PR.

@arete06 commented Sep 30, 2020

@cebtenzzre Yeah, my bad, I've tried so much stuff that I get confused. I tried your version and it didn't download your likes, so I opened a new issue (here).
As to the MAX_LIKES error, isn't that what you told me to do? I modified the @aggroskater version, not yours.

@cebtenzzre (Collaborator) commented Sep 30, 2020

@sldx12 Basically, there is no way to get that NameError unless you either deleted the line MAX_LIKES = 20, or you started from the wrong script. Either way, the line numbers in that traceback don't match the script from this PR at all. They actually match bbolli's version most closely, which would make sense if you got this PR confused with another one. Or, maybe you deleted the wrong else clause, who knows - regardless, you should just use my fork since it has this change already.

@arete06 commented Sep 30, 2020

@cebtenzzre OK, never mind, I tried again. I got the same error as with your PR, only with different numbers. So, whether you reply here or in the issue I created in your PR, it's the same to me; I just want it to work.
Error (I have around 1410 liked posts):

blogname: 1078 posts backed up

cebtenzzre added a commit to cebtenzzre/tumblr-utils that referenced this pull request Oct 2, 2020
cebtenzzre added a commit to cebtenzzre/tumblr-utils that referenced this pull request Nov 25, 2020
cebtenzzre added a commit to cebtenzzre/tumblr-utils that referenced this pull request Jan 17, 2021