DatasetCrawler » History » Version 7

Petr Hlaváč, 2020-05-27 08:49

h1. DatasetCrawler

The folder contains the crawler implementations for the individual datasets. The crawlers are imported dynamically, so the naming convention *"dataset-name"_crawler.py* must be followed.
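
Illustration only: a minimal sketch of how such a crawler module could be imported dynamically under the naming convention above; the helper name and the package path below are assumptions, not the project's actual loading code.

<pre>
import importlib


def load_crawler(dataset_name):
    # Hypothetical helper: map "dataset-name" to the module
    # "<dataset-name>_crawler.py" inside the DatasetCrawler folder.
    return importlib.import_module("DatasetCrawler." + dataset_name + "_crawler")


# e.g. load_crawler("EXAMPLE").crawl(config) would call EXAMPLE_crawler.crawl(config)
</pre>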

It is strongly recommended that a crawler use the function *basic_crawler_functions.download_file_from_url("file_url", "dataset_name")* to download files.
It records in the database which files have already been downloaded, so nothing is downloaded twice or unnecessarily.
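
A minimal sketch of that call inside a crawl method, with a placeholder URL (in a real crawler the links come from the crawled pages):

<pre>
from Utilities.Crawler import basic_crawler_functions


def crawl(config):
    dataset_name = config["dataset-name"]
    # Placeholder URL for illustration; download_file_from_url also records the
    # file in the database, so it is skipped on the next run.
    basic_crawler_functions.download_file_from_url("https://example.com/OD_ZCU_DATA.zip", dataset_name)
</pre>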

h2. Generated crawler

Running the *PrepareNewDataset.py* script produces the following skeleton, which is then filled in with the crawling logic.

<pre>
# Path to crawled data
CRAWLED_DATA_PATH = "CrawledData/"


def crawl(config):
    """
    Implement a crawl method that downloads new data to path_for_files
    to keep the project structure.
    url, regex and dataset_name are taken from config.
    You can use the already implemented functions from Utilities/Crawler/basic_crawler_functions.py

    Args:
        config: loaded configuration file of the dataset
    """
    dataset_name = config["dataset-name"]
    url = config['url']
    regex = config['regex']
    path_for_files = CRAWLED_DATA_PATH + dataset_name + '/'
    print("You must implement the crawl method first!")

</pre>
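
The skeleton assumes that config behaves like a dictionary with at least the keys read above. A purely illustrative example of those values (the real configuration files may contain more fields):

<pre>
# Hypothetical values only, to show what crawl(config) expects
config = {
    "dataset-name": "EXAMPLE",               # also used as the name of the target folder
    "url": "https://example.com/opendata/",  # page whose links will be crawled
    "regex": ".*\\.zip",                     # pattern selecting the files to download
}
</pre>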

h2. Example crawler implementation

As an example, the crawler for the scooter dataset was chosen. It mainly uses functions from *Utilities.Crawler.basic_crawler_functions*.

<pre>
from Utilities import folder_processor
from Utilities.Crawler import basic_crawler_functions

# Path to crawled data
CRAWLED_DATA_PATH = "CrawledData/"


def crawl(config):
    """
    Downloads new data to path_for_files to keep the project structure.
    url, regex and dataset_name are taken from config.
    Uses the already implemented functions from Utilities/Crawler/basic_crawler_functions.py

    Args:
        config: loaded configuration file of the dataset
    """
    dataset_name = config["dataset-name"]
    url = config['url']
    regex = config['regex']
    path_for_files = CRAWLED_DATA_PATH + dataset_name + '/'

    # Collect the links on the first-level page and keep only those matching "^OD_ZCU"
    first_level_links = basic_crawler_functions.get_all_links(url)
    filtered_first_level_links = basic_crawler_functions.filter_links(first_level_links, "^OD_ZCU")
    absolute_first_level_links = basic_crawler_functions.create_absolute_links(filtered_first_level_links, url)

    files = []

    for link in absolute_first_level_links:
        # On each second-level page keep only the links matching the dataset regex
        # and drop those that have already been downloaded
        second_level_links = basic_crawler_functions.get_all_links(link)
        filtered_second_level_links = basic_crawler_functions.filter_links(second_level_links, regex)
        absolute_second_level_links = basic_crawler_functions.create_absolute_links(filtered_second_level_links, link)
        final_links = basic_crawler_functions.remove_downloaded_links(absolute_second_level_links, dataset_name)

        for file_link in final_links:
            files.append(file_link)

    for file in files:
        basic_crawler_functions.download_file_from_url(file, dataset_name)

    # Unpack the downloaded zip archives containing csv files
    folder_processor.unzip_all_csv_zip_files_in_folder(path_for_files)

</pre>