h1. DatasetCrawler

This folder contains the crawler implementations for the individual datasets. The crawlers are imported dynamically, so the naming convention *"dataset-name"_crawler.py* must be followed.
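
The dynamic import is essentially a lookup by module name. A minimal sketch of how such a loader might work (the package name and the loader function below are only illustrative, not the project's actual code):

<pre>
import importlib


def load_crawler(dataset_name):
    # Hypothetical loader: assumes the modules live in a DatasetCrawler package,
    # e.g. DatasetCrawler/mydataset_crawler.py exposing a crawl(config) function
    module = importlib.import_module("DatasetCrawler." + dataset_name + "_crawler")
    return module.crawl
</pre>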

It is strongly recommended that a crawler downloads files through *basic_crawler_functions.download_file_from_url("file_url", "dataset_name")*.
This function records in the database which files have already been downloaded, so duplicates and unnecessary downloads are avoided.
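
A short usage sketch (the URL and dataset name below are placeholders only): the function is called once per file link, and a repeated crawl then skips the links it has already recorded.

<pre>
from Utilities.Crawler import basic_crawler_functions

# Placeholder link list for illustration
links = ["https://example.org/opendata/OD_ZCU_2020_01.zip"]

for link in links:
    # Stores a record of the download, so the same file is not fetched again
    basic_crawler_functions.download_file_from_url(link, "dataset-name")
</pre>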

h2. Generated crawler

Running the *PrepareNewDataset.py* script produces the following skeleton, which is to be completed with the actual crawling logic.

<pre>
# Path to crawled data
CRAWLED_DATA_PATH = "CrawledData/"


def crawl(config):
    """
    Implement a crawl method that downloads new data to path_for_files
    in order to keep the project structure.
    url, regex and dataset_name are taken from config.
    You can use the functions already implemented in Utilities/Crawler/basic_crawler_functions.py.

    Args:
        config: loaded configuration file of dataset
    """
    dataset_name = config["dataset-name"]
    url = config['url']
    regex = config['regex']
    path_for_files = CRAWLED_DATA_PATH + dataset_name + '/'
    print("You must implement the crawl method first!")

</pre>
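
The crawl function expects the loaded configuration to provide at least the *dataset-name*, *url* and *regex* keys. A sketch of such a config, with purely illustrative values:

<pre>
# Purely illustrative values; the real configuration files live elsewhere in the project
config = {
    "dataset-name": "mydataset",
    "url": "https://example.org/opendata/",
    "regex": ".*\\.zip"
}

crawl(config)
</pre>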

h2. Example crawler implementation

<pre>
from Utilities import folder_processor
from Utilities.Crawler import basic_crawler_functions

# Path to crawled data
CRAWLED_DATA_PATH = "CrawledData/"


def crawl(config):
    """
    Implement a crawl method that downloads new data to path_for_files
    in order to keep the project structure.
    url, regex and dataset_name are taken from config.
    You can use the functions already implemented in Utilities/Crawler/basic_crawler_functions.py.

    Args:
        config: loaded configuration file of dataset
    """
    dataset_name = config["dataset-name"]
    url = config['url']
    regex = config['regex']
    path_for_files = CRAWLED_DATA_PATH + dataset_name + '/'

    # Collect the links on the dataset landing page and keep only the OD_ZCU entries
    first_level_links = basic_crawler_functions.get_all_links(url)
    filtered_first_level_links = basic_crawler_functions.filter_links(first_level_links, "^OD_ZCU")
    absolute_first_level_links = basic_crawler_functions.create_absolute_links(filtered_first_level_links, url)

    files = []

    for link in absolute_first_level_links:
        # Descend one level, filter by the dataset regex and skip files downloaded earlier
        second_level_links = basic_crawler_functions.get_all_links(link)
        filtered_second_level_links = basic_crawler_functions.filter_links(second_level_links, regex)
        absolute_second_level_links = basic_crawler_functions.create_absolute_links(filtered_second_level_links, link)
        final_links = basic_crawler_functions.remove_downloaded_links(absolute_second_level_links, dataset_name)

        for file_link in final_links:
            files.append(file_link)

    # Download every new file and unpack the zipped CSVs
    for file in files:
        basic_crawler_functions.download_file_from_url(file, dataset_name)

    folder_processor.unzip_all_csv_zip_files_in_folder(path_for_files)

</pre>