snscrape: https://github.com/JustAnotherArchivist/snscrape

Installation (Windows)

Download and run the Python installer: https://www.python.org/downloads/

Open the command line (Run as Administrator) and install snscrape:

pip3 install snscrape

If you want the development version instead:

pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git

At the command prompt, scrape tweets by hashtag:

snscrape --jsonl twitter-hashtag lasertwitter >c:\tweets.json
snscrape --jsonl twitter-hashtag onomoly >c:\tweets-onomoly.json

Tweets since a specific date:

snscrape --jsonl --since 2022-12-25 twitter-hashtag lasertwitter >c:\newtweets.json

Note that the output is JSON Lines (one JSON object per line), not a single JSON array, despite the .json extension. To convert it, run the results through the JSON Lines to JSON converter: https://www.convertjson.com/jsonlines-to-json.htm

----------------

Usage

CLI

The generic syntax of snscrape's CLI is:

snscrape [GLOBAL-OPTIONS] SCRAPER-NAME [SCRAPER-OPTIONS] [SCRAPER-ARGUMENTS...]

snscrape --help and snscrape SCRAPER-NAME --help provide details on the options and arguments. snscrape --help also lists all available scrapers.

By default, the CLI prints the URL of each result. Some noteworthy global options are:

--jsonl to get output as JSONL. This includes all information extracted by snscrape (e.g. message content, datetime, images; details vary by scraper).
--max-results NUMBER to only return the first NUMBER results.
--with-entity to get an item on the entity being scraped, e.g. the user or channel. This is not supported by all scrapers. (You can use this together with --max-results 0 to fetch only the entity info.)

Examples

Collect all tweets by Jason Scott (@textfiles):

snscrape twitter-user textfiles

It's usually useful to redirect the output to a file for further processing, e.g. in bash using the filename twitter-@textfiles:

snscrape twitter-user textfiles >twitter-@textfiles

To get the latest 100 tweets with the hashtag #archiveteam:

snscrape --max-results 100 twitter-hashtag archiveteam
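
As an alternative to the web converter, the JSON Lines to JSON conversion is a few lines of Python you can run locally (a minimal sketch; the file paths in the example comment are placeholders matching the commands above):

```python
import json

def jsonl_to_json(jsonl_path, json_path):
    """Convert a JSON Lines file (one JSON object per line) to a single JSON array."""
    with open(jsonl_path, encoding="utf-8") as f:
        # Skip blank lines; each remaining line is one complete JSON object.
        records = [json.loads(line) for line in f if line.strip()]
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
    return len(records)

# Example: jsonl_to_json(r"c:\tweets.json", r"c:\tweets-array.json")
```

This keeps the whole file in memory, which is fine for typical scrape sizes.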
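
If you already have a JSONL dump and want to filter it by date after the fact (rather than re-scraping with --since), a short Python sketch will do it. This assumes each line carries an ISO-8601 timestamp in a `date` field, as snscrape's Twitter JSONL output does:

```python
import json
from datetime import datetime, timezone

def tweets_since(jsonl_path, since):
    """Yield objects from a JSONL file whose `date` field is on or after `since` (YYYY-MM-DD)."""
    # Treat the cutoff date as midnight UTC so it compares against aware timestamps.
    cutoff = datetime.fromisoformat(since).replace(tzinfo=timezone.utc)
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            tweet = json.loads(line)
            # Timestamps like "2022-12-25T10:30:00+00:00" parse directly with fromisoformat.
            if datetime.fromisoformat(tweet["date"]) >= cutoff:
                yield tweet
```

Usage would look like `list(tweets_since(r"c:\tweets.json", "2022-12-25"))`.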