
h1. DatasetCrawler 

The folder contains the crawler implementations for the individual datasets. The crawlers are imported dynamically, so the naming convention *"dataset-name"_crawler.py* must be followed.
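
For illustration, a minimal sketch of what the dynamic import can look like (the exact loader used by the project may differ; the package path used here is an assumption):

<pre>
import importlib


def load_crawler(dataset_name):
    # Sketch only: assumes the crawlers live in a "DatasetCrawler" package.
    # The "dataset-name"_crawler.py naming convention is what makes this lookup work.
    return importlib.import_module("DatasetCrawler." + dataset_name + "_crawler")
</pre>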

It is strongly recommended that a crawler use the function *basic_crawler_functions.download_file_from_url("file_url", "dataset_name")* to download files.
It records in the database which files have already been downloaded, so duplicates and unnecessary re-downloads are avoided.

h2. Generated crawler

Running the *PrepareNewDataset.py* script generates the following skeleton, which needs to be filled in with the crawling logic.

 <pre> 
 # Path to crawled data 
 CRAWLED_DATA_PATH = "CrawledData/" 


 def crawl(config): 
     """ 
     Implement a crawl method that downloads new data to path_for_files
     so the project structure is kept consistent.
     url, regex and dataset_name are taken from config.
     You can use the functions already implemented in Utilities/Crawler/basic_crawler_functions.py

     Args: 
         config: loaded configuration file of dataset 
     """ 
     dataset_name = config["dataset-name"] 
     url = config['url'] 
     regex = config['regex'] 
     path_for_files = CRAWLED_DATA_PATH + dataset_name + '/' 
     print("You must implements Crawl method first!") 

 </pre> 
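
The config dictionary passed to crawl() is expected to contain at least the keys the skeleton reads; a minimal example with placeholder values (the real configuration files of the project may contain additional fields):

<pre>
config = {
    "dataset-name": "example-dataset",        # must match example-dataset_crawler.py
    "url": "https://example.com/opendata/",   # placeholder: page listing the data files
    "regex": ".*\\.zip"                       # placeholder: pattern selecting files to download
}
crawl(config)
</pre>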

h2. Example crawler implementation

 <pre> 
 from Utilities import folder_processor 
 from Utilities.Crawler import basic_crawler_functions 

 # Path to crawled data 
 CRAWLED_DATA_PATH = "CrawledData/" 


 def crawl(config): 
     """ 
     Implement a crawl method that downloads new data to path_for_files
     so the project structure is kept consistent.
     url, regex and dataset_name are taken from config.
     You can use the functions already implemented in Utilities/Crawler/basic_crawler_functions.py

     Args: 
         config: loaded configuration file of dataset 
     """ 
     dataset_name = config["dataset-name"] 
     url = config['url'] 
     regex = config['regex'] 
     path_for_files = CRAWLED_DATA_PATH + dataset_name + '/' 

     first_level_links = basic_crawler_functions.get_all_links(url) 
     filtered_first_level_links = basic_crawler_functions.filter_links(first_level_links, "^OD_ZCU") 
     absolute_first_level_links = basic_crawler_functions.create_absolute_links(filtered_first_level_links, url) 

     files = [] 

     for link in absolute_first_level_links: 
         second_level_links = basic_crawler_functions.get_all_links(link) 
         filtered_second_level_links = basic_crawler_functions.filter_links(second_level_links, regex) 
         absolute_second_level_links = basic_crawler_functions.create_absolute_links(filtered_second_level_links, link) 
         final_links = basic_crawler_functions.remove_downloaded_links(absolute_second_level_links, dataset_name) 

         for file_link in final_links: 
             files.append(file_link) 

     for file in files: 
         basic_crawler_functions.download_file_from_url(file, dataset_name) 

     folder_processor.unzip_all_csv_zip_files_in_folder(path_for_files) 

 </pre>
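
The example first collects all links on the dataset landing page, keeps only those matching "^OD_ZCU", and follows each of them one level deeper. The second-level links are filtered with the regex from the configuration, *remove_downloaded_links* drops files already recorded in the database, the remaining files are fetched with *download_file_from_url*, and finally *folder_processor.unzip_all_csv_zip_files_in_folder* extracts the downloaded archives in the dataset's folder.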